Given all the promises we hear about AI, it might come as a surprise that researchers are strongly divided on the question of how the field should develop. The split is between proponents of traditional, logic-based AI and enthusiasts of neural network modeling. As computer scientist Michael Wooldridge puts it in a concise survey of the dispute, “should we model the mind or the brain?”
It’s not quite as simple as that, of course (with AI, it never is). But hopefully it’s not impossible to explain the difference.
Symbolic AI. AI has its historical roots in a thought experiment published by Alan Turing and known as the “Turing Test.” Without diving into the detail, the test was supposed to provide a criterion for judging success in modeling human intelligence — the mind.
Successfully modeling intelligence was the chief goal of AI for decades. “Symbolic AI” refers to the widely held assumption that human intelligence is reducible to logical statements — the kind that can be captured by symbolic logic.
This approach allowed AI to make enormous strides in areas of human intelligence well circumscribed by clearly defined rules. That included mathematical calculation, of course, and — famously — chess. The problem was that much of human thinking doesn’t make its rules explicit, even if rules of some kind underlie our thought processes. Traditional AI lagged in recognizing patterns, and therefore in understanding images. And good luck drawing up a set of rules for skills like hitting a baseball or riding a bike. We learn to do those things (or not), but not by studying a set of statements describing the actions involved.
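To make the contrast concrete, here is a minimal, purely illustrative sketch of the rule-based idea in Python (the piece values, the function name and the scoring rules are assumptions chosen for this example, not anything drawn from a real chess program): explicit, human-written rules handle a well-defined task like totting up chess material, but nobody could write a comparable rule set for recognizing a face in a photograph.

```python
# Illustrative only: symbolic, rule-based reasoning as a handful of
# explicit if-then rules. This works because the domain (chess material)
# has clear, well-defined rules; perception tasks resist this treatment.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def material_advantage(white_pieces, black_pieces):
    """Apply human-written rules to judge who is ahead on material."""
    white = sum(PIECE_VALUES[piece] for piece in white_pieces)
    black = sum(PIECE_VALUES[piece] for piece in black_pieces)
    if white > black:
        return "White is ahead on material"
    if black > white:
        return "Black is ahead on material"
    return "Material is even"

print(material_advantage(["queen", "rook", "pawn"], ["rook", "rook", "pawn"]))
# -> White is ahead on material
```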
Deep learning. An alternative approach to AI is sometimes misdescribed as modeling the networks of the human brain. Rather, it takes inspiration from how human neural networks are thought to work. Again, without going too deep: large artificial networks of “nodes,” trained on substantial data sets, learn to recognize statistical relationships in the data, and a feedback loop between the layers of nodes creates the possibility of self-correction. The sheer scale of processing, and the multiple layers of nodes, give this approach the name “deep learning.”
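Purely as an illustration of those ideas (the tiny network, the XOR task, the layer sizes and the learning rate here are assumptions chosen for brevity, not how production systems are built), here is a sketch in Python of two layers of nodes learning a pattern from data, with the backward pass acting as the feedback loop that corrects the weights:

```python
# A toy illustration, not production deep learning: a tiny two-layer
# network of "nodes" learns the XOR pattern from four examples.
import numpy as np

rng = np.random.default_rng(1)

# Training data: two inputs per row, plus the XOR label for each row.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights (and biases), initialized randomly.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: each layer of nodes transforms the previous layer's output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (the feedback loop): measure the error and nudge
    # every weight a little in the direction that shrinks it.
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # should end up close to the targets 0, 1, 1, 0
```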
It was precisely that scale which hindered the development of this approach. Until relatively recently, there wasn’t enough data or enough computing power to make deep learning both practicable and cost-effective. But that has changed, and that’s why, in recent years, we’ve seen rapid improvement in, for example, AI image recognition.
The drawback, especially when it comes to understanding texts, has been that this immensely powerful engine is essentially flying blind. It will recognize enormous numbers of correlations in the data it’s fed and respond accordingly. Since it doesn’t understand the data in any intelligent sense, errors — or, as people have observed, prejudices — will simply become more ingrained unless a human steps in to correct matters.
In simple terms, a deep learning system gluttonous enough to absorb the whole internet would, in doing so, absorb a lot of nonsense, some of it pernicious. This approach, it’s as well to add, also leaves a large carbon footprint.
What are the implications? Does any of this matter to marketers? To the extent that marketing is not invested in the project of modeling human intelligence, perhaps not — although it does explain why the prospect of entrusting business strategy or planning to AI remains a long way off.
If there’s an upside, it’s that this rift between symbolic AI and deep learning supports what might otherwise seem like wishful thinking: that AI can support some functions within marketing — campaign optimization, personalization, data management — while freeing marketers to be strategic and creative.
It’s not so much a matter of marketers hoping that AI won’t take their jobs. It’s that AI isn’t close to being ready to do so.
This story first appeared on MarTech Today.