
The Handy Artificial Intelligence Answer Book (The Handy Answer Book Series)
Author(s): A.G.G. Liu (Author), Aishwary Pawar Ph.D. (Author)
- Publisher: Visible Ink Press
- Publication Date: June 16, 2026
- Language: English
- Print length: 400 pages
- ISBN-10: 157859779X
- ISBN-13: 9781578597796
Book Description
What is artificial intelligence? How does it work? Who is behind it? What can we expect from it in the future? Good questions, and we have the answers!
Artificial intelligence is popping up everywhere: in Google searches, calls to customer service, healthcare, education, and personal assistants—to name just five. It’s rapidly changing businesses and our lives. Covering the basics, its history, and the science behind it, The Handy Artificial Intelligence Answer Book is the perfect starting point for understanding the emerging AI revolution, what is currently possible, what might be possible in the future—and what’s just hype. Tackling what has been, what is, and what will be—and the “what ifs”—of AI, this informative book answers more than 1,400 questions, such as …
- What does the term “artificial intelligence” actually mean?
- What real-world AI tools do people use daily, like Alexa, Netflix, and facial recognition?
- How do AI algorithms adapt to changes in search trends and user behavior?
- What is AI-based image stabilization in smartphones and cameras?
- How does AI assist in surgical procedures and robotic surgery?
- How does the use of AI in surveillance affect human rights?
- What industries are most vulnerable to automation and AI-driven job loss?
- How can AI recognize human emotions and adapt its responses accordingly?
- What can we learn from other technologies as we consider how AI will shape our world?
- And hundreds more!
Informative, provocative, and intriguing, The Handy Artificial Intelligence Answer Book looks at this emerging technology and what the past might tell us about the future. Ponder the potentials—and the perils—with this enlightening examination of AI!
Editorial Reviews
About the Author
A.G.G. Liu is a writer and online educator based in New Jersey. Liu has reached millions of online learners on YouTube—both as the lead script writer for the AI and futurism channel Rational Animations and through their own educational channel, Signore Galilei, which they started while in college at Harvard. Liu co-authored the book 30-Second Space Travel with Dr. Charles Liu and Dr. Karen Masters and co-hosts the podcast The LIUniverse with Dr. Charles Liu. Residing in Bloomfield, New Jersey, Liu also enjoys singing, amateur radio, and strategy games.
Aishwary Pawar, Ph.D., is a Statistician in the University Decision Support office at Southern Methodist University, where his work focuses on applying advanced data analytics and predictive modeling to improve student retention, success, and institutional outcomes. He holds a Ph.D. in Industrial Engineering with a specialization in Operations Research and Engineering Education. Aishwary has co-authored The Handy Engineering Answer Book and is the author of Your First Code: Python Basics for Absolute Beginners. His research interests include predictive analytics, student success modeling, and ethical data use in higher education. He also teaches courses in data science and Python programming as an adjunct faculty member. Outside of work, he enjoys exploring global food cultures, collecting sneakers, and mentoring students in data-driven careers. He resides just outside Dallas.
Excerpt. © Reprinted by permission. All rights reserved.
What is AI?
If you’ve been paying attention to the news lately, the progress on artificial intelligence, or AI, is impossible to miss. From cars to chatbots to camera filters, AI is finding its way into more and more of our digital lives, and even our “offline” ones. But what even is AI, and what makes it both so exciting and so worrying?
Artificial intelligence broadly refers to the capability of any machine to process information. A particular machine that is good at doing so might itself be called an AI or an AI system. Today’s leading-edge AI systems all run on digital processors – the ones that power your laptop or your cell phone, or their more powerful cousins – but an AI doesn’t always have to be digital. One of the first experimental AIs, called MENACE, was run by an operator moving colored beads in and out of matchboxes according to a list of instructions.
The reason AI has been in the news so much in the late 2010s and the 2020s is that researchers have made rapid progress on a kind of AI called “machine learning,” especially a variant called “deep learning.” Deep learning on today’s top supercomputers has allowed AI systems to crack a long list of problems that vexed researchers for decades. These include analyzing the content of photos and videos, uncovering the structures of the proteins that make your cells work, and writing a passable essay on any topic you can name. These days, when a tech company touts its latest product as being powered by AI, it usually means an AI built using deep learning.
Machine learning is special because it’s a major departure from the way computer programs have been built in the past. If you want a computer to perform a task, the typical approach is for someone to write a program that spells out, in exacting detail, the directions the computer should follow every time.
As an example, take the computer chip in a very critical device: your coffee machine. Whoever wrote the chip’s software must have specified everything the machine can do. The beeps that play and the lights that shine when you press a button; how to use the data from the coffee maker’s digital thermometers or timer to tell when the coffee is ready; what to do if the machine gets unplugged; if it’s one of those fancy pod machines, how to read what kind of coffee pod you just put in – all of that needs to be specified by a human programmer writing in some form of computer code.
For something like operating a coffee machine, even a fancy one, the tasks are straightforward enough that a person or a team of people can write the program effectively. There are only maybe half a dozen buttons on the machine that a person can press, and only a few sensors that the computer chip needs to pay attention to in order to make a decent cup of coffee, so human programmers can usually think through all the possibilities the machine might encounter.
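To make that concrete, here is a minimal sketch of that kind of explicit, rule-by-rule programming in Python. The sensor names, thresholds, and behaviors are invented for illustration, not taken from any real machine:

```python
# A toy sketch of traditional, rule-based programming: every behavior is
# spelled out in advance by a human. The sensor names, thresholds, and
# messages here are invented for illustration.

def coffee_machine_step(temp_celsius, seconds_brewing, button_pressed):
    """Decide what the machine should do next, given its sensor readings."""
    if button_pressed == "power":
        return "beep and light up the display"
    if temp_celsius < 90:
        return "keep heating the water"
    if seconds_brewing < 240:
        return "keep brewing"
    return "turn on the 'coffee ready' light"

print(coffee_machine_step(temp_celsius=95, seconds_brewing=250,
                          button_pressed=None))
# -> turn on the 'coffee ready' light
```

Every branch of that logic came out of a programmer’s head; the program will never do anything its author didn’t anticipate.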
By contrast, some tasks involve more information than a human could possibly sort through. Take the task of recognizing photos. Suppose you were trying to program a computer to separate pictures of dogs from pictures of cats. When people look at a picture, we immediately see the objects in it. When a computer looks at a picture, it just sees a bunch of numbers representing the colors of the individual pixels, and there are millions of those pixels in just one high-resolution photograph from your phone’s camera.
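Here is a small sketch of what “just a bunch of numbers” means in practice, assuming Python with the Pillow and NumPy libraries installed; the filename cat.jpg is a placeholder:

```python
# What a computer "sees" when it opens a photo: a grid of numbers.
# Assumes Pillow and NumPy are installed; "cat.jpg" is a placeholder filename.
from PIL import Image
import numpy as np

photo = Image.open("cat.jpg")
pixels = np.array(photo)   # shape: (height, width, 3 color channels)

print(pixels.shape)        # e.g. (3024, 4032, 3) for a 12-megapixel photo
print(pixels[0, 0])        # the top-left pixel's red, green, and blue values,
                           # e.g. [142  97  63] -- just three numbers
```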
Adding to the problem, there are billions of different pictures of dogs and cats in the world, and even the same cat would look different from a different angle, with different lighting, or with a different background. There’s just no way a human programmer could think of all the different ways to tell a cat photo apart from a dog photo.
This is where machine learning comes in. Instead of a programmer designing code to distinguish every possible dog from every possible cat, the programmer instead makes a program that can “learn” over time what makes a cat different from a dog by going through thousands or even millions of examples.
This kind of learning isn’t as direct or sophisticated as when a human learns a new skill, or even when a dog learns a new trick. The computer is just tweaking a bunch of equations according to specified rules until it starts giving answers that look right. The computer feeds the numbers of one digital photo after another through these equations. Every time the computer correctly identifies a dog or a cat, the equations are adjusted to do more of what went right; conversely, every time the computer makes a mistake, the equations are adjusted to do less of what went wrong.
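A toy version of that adjust-the-equations loop fits in a few lines of Python. This sketch uses two made-up features (ear pointiness and snout length) and four invented examples; real systems work from millions of raw pixel numbers and far more elaborate equations, but the tweak-when-wrong idea is the same:

```python
# A toy version of "learning": tweak the numbers in an equation whenever
# the program gets an example wrong. Features and data are invented.
examples = [
    # (ear_pointiness, snout_length, label: 1 = dog, 0 = cat)
    (0.2, 0.9, 1),
    (0.3, 0.8, 1),
    (0.9, 0.2, 0),
    (0.8, 0.3, 0),
]

w1, w2, bias = 0.0, 0.0, 0.0        # the "equation" starts out knowing nothing

for _ in range(20):                 # go through the examples many times
    for ears, snout, label in examples:
        guess = 1 if (w1 * ears + w2 * snout + bias) > 0 else 0
        error = label - guess       # 0 if right; +1 or -1 if wrong
        # Nudge the equation in the direction that fixes the mistake
        w1 += 0.1 * error * ears
        w2 += 0.1 * error * snout
        bias += 0.1 * error

print(w1, w2, bias)  # the trained equation now separates the two groups
```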
Pretty much all the recent advances in machine learning boil down to one of three things: more cleverly setting up those equations and the rules by which they change, getting more examples for the program to learn from, or getting a more powerful computer that can handle more complex equations and churn through examples faster. Just these very basic ideas, applied in creative ways on the world’s top supercomputers, can create systems able to navigate city streets, write news articles, or copy the voice of a dead pop star.
Today’s AI is a major shift in how computers complete tasks. Rather than relying on expert programmers, top AI systems now get better by sorting through huge amounts of real-world data. This new reality brings with it a host of unanswered questions. Who owns that data, and who gets to decide how it gets used? How smart can these kinds of programs get, and what will happen as they get smarter? In what ways will people use and misuse these powerful tools? These questions will decide our future in ways we can’t afford to ignore – and it’s exactly these questions that we’ll be tackling head-on.
Allen Liu is a science author, podcaster, and video host based in New Jersey. He authored the book 30-Second Space Travel with Karen Masters and Charles Liu, which details the history and physics of spaceflight. Liu has published numerous science education videos on YouTube, where he runs the channel Signore Galilei, which has amassed more than six and a half million views. He writes for the channel Rational Animations, which specializes in AI and the future. He also co-hosts the science podcast The LIUniverse, and he has published software for visualizing astrophysics data in virtual reality. Allen earned his degree in math from Harvard, where he specialized in four-dimensional geometry and did work in biological and artificial intelligence. He resides in Montclair, New Jersey, where he also pursues his interests in singing, amateur radio, and 3D printing.


