
How common is commonsense AI?

'Machines Like Us' co-author Ron Brachman explains what it would take to build common sense into AI systems and the benefits of doing so.

Currently, AI systems are good at specialized tasks. However, they can fail unexpectedly in peculiar ways -- ways that a human likely wouldn't. Automated cars sit indefinitely at broken stoplights waiting for the light to turn green, image processing software mistakes pictures of buses for pictures of ostriches, and Alexa instructs young children to touch a penny to metal prongs plugged into an electrical socket.

No matter how good AI is in some respects, its repeated displays of missing common sense erode our trust in it.

Machines Like Us co-author Ron Brachman defines common sense, explains how machines might come to achieve it and describes the challenges experts have faced along the way.

Editor's note: This Q&A has been edited for length and clarity.


How do you define common sense?

Ron Brachman: We're not always correct. We may not be expert about some things, and even when we are, we make mistakes and miss some obvious things. But, by and large, when we answer questions or try to solve problems or plan actions, we're sort of reasonably right almost all the time. It doesn't take a rocket scientist to get around in everyday life and prepare dinner and drive to work and walk the dog and things like that.

A lot of these commonsensical things we do every day are based on just knowing and observing stuff that we've seen all of our lives. That works reliably. We have a feeling of what causes what. If I push something, it'll fall over. We get to the point where it's so automatic we don't really think about it. And, if you were to interview someone -- which became a practice in the field -- to extract from their head their knowledge about how to do something, there are all these obvious things that we would forget to articulate. People breathe, objects fall when you drop them, hard surfaces will stop a falling object or hold something up. If you looked around and tried to think of all the kinds of rules of thumb that operate in the world you live in every day, there's a huge number, and we never really think of them explicitly.


There was one project that started in roughly 1984 -- and is still going -- trying to build a humongous knowledge base called Cyc, made up largely of commonsense facts and rules. The number of person-hours that have gone into building this knowledge base is just mind-boggling.

Are there any commonsense AI systems, or close to it, in the world today?

Brachman: You're probably aware that there's been a gold rush over the last 10 years toward what people call machine learning -- systems that you can train from very large training sets of examples. Some of those systems that have been built -- especially recently -- are just incredibly impressive.

I don't know if you've played with ChatGPT. It's really quite amazing in some ways, and then it does make some goofy mistakes. Like, if you ask it what the gender of the first female president will be, it'll say, 'I'm sorry, I can't predict who the first female president will be, so I don't know that person's gender.' It's not a life-threatening blunder, but that's a real goof. You look at that and say, 'A fifth-grader would get that right. What's going on here?'

So, most of what these things do is quite remarkable. But you can't predict when they're going to come up with something that's shocking, and I think that's really the problem. People make mistakes; some are more trustworthy than others, and some make more mistakes. But, generally speaking, if you have a reasonable idea of somebody's competence, you kind of know what you'll get if you put them in a certain situation.

The way these things are built, there are some things they just don't do that they would have to be able to do to acquire and use common sense. The problem right now, at least with machine learning-based systems that are neural nets, is that they do a great job of processing large amounts of data and forming generalized pattern detectors, but you can't talk to them about what they know. And, typically, you can't give them little bits of advice, like 'there's no right turn in this town' or 'I've been through this intersection a few times recently, and there's something quirky about this light.' The architectures they use now really cut out that kind of possibility.

How did you begin to get the machines to mimic people?

Brachman: You put something in a situation where it's got to make a decision on what to do. There are different outcomes, at least some of which just depend on normal human experience. You have to think about them a little bit. You don't just follow your nose.

Daniel Kahneman, who's a cognitive psychologist, wrote a really well-received book called Thinking, Fast and Slow based on years of research. He did his own observations about human decision-making and its flaws, which is his area of expertise. He suggests that humans operate with two different systems. One is very fast, intuitive, almost reflexive, pattern-driven, like face recognition. He uses the term 'system one.'

The other one is very thoughtful. You can do math, you can do serious long-term planning and a lot of complicated thinking with what he called 'system two.' You can easily imagine the kind of AI things I talked about fitting into these two boxes: the neural net, reactive, quick-recognizer in the system one box and a much more thoughtful, symbolic, logical engine in the system two box.

One of the things that we've started to think about is that it's not just two things at the ends; there's a continuum, and there are other kinds of thinking in between. Kenneth Hammond -- another cognitive psychologist -- has an account of common sense that puts it in the middle, which I think it is. He calls it 'quasi-rationality.' It's some intuition -- the rapid stuff -- and some analysis -- the slower stuff -- together.
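As a rough illustration of that continuum, here is a minimal sketch in Python -- hypothetical, not from the book -- of a quasi-rational decision loop: a fast, pattern-driven guess handles familiar cases, and a slower, rule-based analysis takes over when the fast guess isn't confident. All the observations, rules and confidence numbers are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Guess:
        action: str
        confidence: float  # 0.0 to 1.0

    def system_one(observation: str) -> Guess:
        # Fast, reflexive pattern lookup -- a stand-in for a trained recognizer.
        patterns = {
            "green light": Guess("proceed", 0.95),
            "red light": Guess("stop", 0.95),
            "dark light": Guess("stop", 0.40),  # unfamiliar case: low confidence
        }
        return patterns.get(observation, Guess("stop", 0.10))

    def system_two(observation: str) -> Guess:
        # Slow, deliberate reasoning -- a stand-in for a symbolic engine
        # that applies explicit commonsense rules and could accept advice.
        if "dark" in observation:
            # The kind of advice a person might give: treat a dead signal
            # as a stop sign rather than waiting forever for green.
            return Guess("treat as stop sign, then proceed when clear", 0.90)
        return Guess("stop and reassess", 0.50)

    def quasi_rational(observation: str, threshold: float = 0.8) -> str:
        # Use the fast guess when it's confident; otherwise deliberate.
        fast = system_one(observation)
        if fast.confidence >= threshold:
            return fast.action
        slow = system_two(observation)
        return slow.action if slow.confidence > fast.confidence else fast.action

    print(quasi_rational("green light"))  # proceed
    print(quasi_rational("dark light"))   # treat as stop sign, then proceed when clear

The sketch shows only the division of labor: the broken-stoplight case from the opening falls through the fast pattern box and lands in the deliberative one, which is where the kind of advice Brachman mentions would live.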

Where would commonsense AI be useful were it to exist?

Brachman: I would say the places where this really matters -- where a system is going to be autonomous and run without direct human oversight and control -- are any kind of situation where harm can be caused. That can be physical harm, it could be financial harm, it could be misunderstandings. You've got to get this right; you've got to have common sense and act the way people do because, frankly, stuff happens.

Self-driving cars are the most obvious ones because you're driving heavy vehicles that are kind of dangerous weapons, both to the people inside and people and property on the outside. They can be very dangerous if something goes wrong.

There are so many bizarre things that might happen only once in your lifetime. But many, many of those kinds of things happen all the time. Driving to work, you probably see stuff almost every day that you've literally never seen before.

[Co-author] Hector [Levesque] likes to use the example of a squirrel that comes up to the side of the road holding a thimble. You can imagine somebody dropped some sewing stuff on the street, and this thing found it. If it happened to you, you would immediately know it's not dangerous, and it's kind of funny, and you wouldn't stop what you're doing, right? But it'll probably never happen to you, or to most of the people you know, in your entire life. The world is so complicated -- so much can happen that you can't afford to let a system out there on its own unchecked if it could do any harm.

So, common sense is necessary for real autonomy. It'd be the same for undersea exploration or robot recovery of people and things in disaster locations, like the rubble from an earthquake, right? Many of the current machines for that have tethers, so a human can actually see what the robot is seeing and control it. If a machine is making decisions, even if it's all software, it depends on what harm a bad decision can do.
