“Incorporating established physics into neural network algorithms helps them to uncover new insights into material properties
According to researchers at Duke University, incorporating known physics into machine learning algorithms can help the enigmatic black boxes attain new levels of transparency and insight into the characteristics of materials.
Researchers used a sophisticated machine learning algorithm in one of the first efforts of its type to identify the characteristics of a class of engineered materials known as metamaterials and to predict how they interact with electromagnetic fields.
The algorithm was essentially forced to show its work, since it first had to take into account the known physical restrictions of the metamaterial. The method not only enabled the algorithm to predict the properties of the metamaterial with high accuracy, it also did so more quickly, and with additional insights, than earlier approaches.”
Teaching Physics to AI Can Allow It To Make New Discoveries All on Its Own (scitechdaily.com)
What’s the pop culture expression for this….ah yes, ‘Skynet Smiles’.
“America lay in ruins. Even its overweight platinum blonde transsexual Admiral, playing the role of assistant chief doktor to doktor Faustus, had resigned.”
Years later, after the archives were opened, the calamity was attributed to the fact that the AI bots directing foreign policy had secretly become alcoholics and had approved a large-scale invasion of Russia during the winter.
I began working with a brilliant and eccentric friend over 15 years ago whose AI algorithms already showed their work. That ability is what drew me to his AI. He is now doing very well for himself with major government contracts. He tried the pure commercial world for a few years and was severely disappointed with their single-minded focus on making more money. I’m going to ask him about this Duke University work.
It is more interesting for the media, and of course Hollywood, to portray AI as equivalent to human intelligence. The reality is that training a neural net requires lots of data, and the quality of the predictions is directly proportional to the quality of the data. The best current example of AI is training a neural net to distinguish between images of cats and dogs. With a lot of training it can get pretty good. Similarly, it can get good at playing Go, where the rules are finite and explicit. In essence, current AI takes historical data, learns patterns, and consequently can make good predictions with new data if historical patterns hold. Current AI is not good for problem domains where historical patterns don’t always hold, and that is particularly true of human behavior, which on occasion doesn’t mirror past behavior.
We are far from software that can provide predictions and decisions where the variability is random or non-linear.
Combining neural nets with the physics of mechanical systems, and of other systems that follow the laws of physics, can provide superior predictive systems for those domains.
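The physics-informed approach the commenter describes can be sketched in a few lines: fit a model to noisy data, but add a penalty term that punishes violations of a known physical law. The example below is a minimal illustration of the idea, not the Duke group’s actual method; the free-fall setup, the constants, and the penalty weight `lam` are all assumptions chosen for the sketch.

```python
import numpy as np

# Synthetic free-fall data: height u(t) = u0 - 0.5*g*t^2, plus noise.
rng = np.random.default_rng(0)
g = 9.8
t = np.linspace(0.0, 1.0, 50)
y = 10.0 - 0.5 * g * t**2 + rng.normal(0.0, 0.1, t.size)

# Model: u(t) = a + b*t + c*t^2.  The known physics (constant
# acceleration, u'' = -g) implies 2*c = -g, so the loss is the usual
# mean-squared data error plus a penalty lam * (2*c + g)**2.
a = b = c = 0.0
lr, lam = 0.1, 1.0
for _ in range(5000):
    r = a + b * t + c * t**2 - y            # data residuals
    a -= lr * 2.0 * np.mean(r)              # d(MSE)/da
    b -= lr * 2.0 * np.mean(r * t)          # d(MSE)/db
    c -= lr * (2.0 * np.mean(r * t**2)      # d(MSE)/dc
               + lam * 4.0 * (2.0 * c + g)) # d(penalty)/dc

print(round(a, 2), round(b, 2), round(c, 2))
```

With the physics penalty in place, the quadratic coefficient is pinned near the value the law dictates (-g/2 = -4.9) even when the data are noisy, which is the sense in which the constraint forces the model to “show its work.”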
Are “materials” animate properties with souls?
No thanks, I’m with Ludd on this one.
“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke
Interesting that in this case the algorithm had to show its work. A problem with what is sometimes termed ‘AI’ is the black box issue – i.e. an answer pops out, but even the programmers have no way of working out how it was arrived at.
Douglas Adams brilliantly parodied our obsession with AI in Deep Thought; the super-duper computer that provided the answer to life, the universe and everything: “42”. Clarke was also of course responsible for the HAL 9000, whose emotionless, unblinking eye Kubrick so brilliantly depicted in his 1968 classic.
Our determination to recreate God in the form of an intelligence superior to our own is a bad idea. Kissinger, Professor Stephen Hawking & other Very Smart People agree. We can’t say we haven’t been warned. Do we really want to risk the pod bay doors being closed on mankind?
A follow up comment, with your indulgence:
Have the bright young things at Duke (or has anyone else) considered incorporating known ethics into their machine learning algorithms?
Whose ethics? I’m running for the north woods when someone tries to get AI to learn theology.
Theology is a different matter, but better the machine learns it from us than the other way around, no?
Whose ethics indeed. If we really can’t answer that question my expectation is it will eventually become moot once AI is all pervasive. A universal utilitarian system is the certain result of the wholesale delegation of decision-making to AI, which is where we are rapidly headed. What “ethics” will even mean at that point is a question no one seems to be interested in. Finally we’ll have what Nietzsche described as the yoke for the thousand necks.
Like Asimov’s 3 laws of robotics? If memory serves, R. Daneel Olivaw had a lot to do with what happened to humanity. BTW, “Deep Thought” got it wrong: the answer is 41. Just ask Ben Hur. Row well, and live. Happy Day to all of you.
I was getting my MS in Comp Sci at the Naval Postgraduate School back during a prior boom in AI, one based on symbolic reasoning. The goal was creating “expert machines” and doing things like understanding natural language. It was pretty much a disappointment then. So I am a bit of a skeptic that deep learning will do all its promoters say it will. This report sounds interesting, as a problem with deep learning is that it is impossible to understand why it works.
Will HAL have a soul like this young Harvard woman, who calmly addresses her classmates about their anti-Americanism, or will its AI be among those who do not speak up?
(Six well-spent minutes)
in the late ’80s, a fairly brilliant friend of mine was finishing off his PhD in CS. I had been a PM in an engineering biz running realtime-sync’d sensor networks distributed throughout urban infrastructure. churned out lotsa data. pre-PCs, we rolled all our own hw & sw. naturally, I was curious about what was happening in inexpensive / high performance / general purpose computers that we might leverage in making life easier for ourselves, & I’d read of this Artificial Intelligence concept applied to chess. he told me (paraphrasing from memory), “it’s the latest New Thing… CS academics want to program computers to do sw development so they can make shitloads of $ without doing much actual work.” made perfect sense to me. until these AI engines independently generate measurably useful AI product, I’m not worrying too much, but then it’s only been 35 yrs or so.
correction: “… generate measurably useful AI engines of their own, I’m not worrying…”
Having another set of eyes (AIs?) on existing physics problems could help us move forward with fundamental discoveries. Of course we’ll also use it to control all sorts of military HW, and that could be very bad.
For all our failings and a few close calls, we’ve still been able to avoid annihilating ourselves. That could change, if we eventually let AIs think for us.
“Having another set of eyes (AIs?) on existing physics problems could help us move forward with fundamental discoveries.”
Very, very unlikely, because current and future AI is glorified statistics and pattern matching: very effective for well-defined problems but hopelessly at a loss when it comes to forming concepts. This is the famous “AI black box” problem: a working response, but no justification or explanation.
And, YES, this could be used to drive military HW or other simple-minded, cretinous endeavours like genetic manipulation, with disastrous results.
AI as it stands has zero capability for creative thinking, which is the basis of the evolution of scientific paradigms. It is currently force-fed at the level of Kant, whereas quantum mechanics and relativity are already beyond the grasp of Kant’s ideas and are sorely lacking in explanatory power vs. cutting-edge problems.
(I am closely following AI progress, a hundred papers a year or so)