Hyundai developing injury-diagnosing AI technology | CarAdvice

Hyundai is working with a Korean artificial intelligence (AI) specialist on technology capable of diagnosing injuries after a car accident.

Using the car’s inbuilt sensors, Hyundai says the system will build a picture of what’s happening in the cabin within seven seconds of an impact, before sharing that information – along with details of which safety systems activated in the car – with emergency services.

It will also scan the car for damage and create a detailed report of what’s wrong for the manufacturer. Hyundai says knowing more about what gets damaged across a range of accident types will help it design safer, stronger cars.

At the moment, the artificial intelligence company, MDGo, is working to train the system’s brain by comparing its injury assessments with “real data on patients’ injuries”. This process of teaching the software is called ‘iterative enhancement’.

There’s no timeline provided on when this technology could roll out in Hyundai production cars.

This content was originally published here.

Estonia approves the action plan for implementing AI

The e-Estonia council on 5 June approved the action plan for implementing artificial intelligence, or the so-called “kratt” strategy.

“Kratt” is a creature from Estonian mythology that the government of Estonia uses as a synonym for narrow AI applications. Its rough translation would be something like a “gremlin” or a “goblin”.

The group of specialists who developed the action plan for artificial intelligence advises encouraging the testing of kratts as widely as possible, because this helps identify the areas that would benefit most from them.

“We must also map the public services that would benefit most from enhancing the implementation of kratts and develop ways for kratt solutions to work together,” the government said in a statement.

Greater digitalisation is needed

Currently, 16 kratts have been implemented in the Estonian public sector, but this number will increase to 50 next year, the government said.

“For the purpose of applying artificial intelligence in the private sector successfully, greater digitalisation of business operations is required,” the government asserted. “Public awareness of artificial intelligence solutions must be raised, along with knowledge of how to apply them. There is no need for a separate ‘kratt’ act, but legislation must be adjusted accordingly.”

The e-Estonia council manages the development of the Estonian information society and the digital state, assembles specialists and working groups as needed, and commissions analyses in the field of information and communication technology policy.

Cover: A robot called Pepper, the first robot student admitted to the University of Tartu in 2018.

This content was originally published here.

Tony Blair and John Major: two bugs trying to halt the Maybot

What possessed the Maybot’s handlers to let her answer questions at the end of her Mansion House speech on the UK’s negotiations with the EU? Even the most advanced forms of artificial intelligence can be made to look silly when confronted with non-computable inquiries. So it was on Friday when, aft

This content was originally published here.

First Listen: Artificial Intelligence – ‘Good Things’ (Metalheadz)

Drum and bass heavyweights Artificial Intelligence return to Metalheadz.

Made up of Zula Warner and Glenn Herweijer, Artificial Intelligence are a duo who really need little introduction. Since emerging in the early 2000s, they have gained prolific status in the drum and bass scene, with the success they found early in their career building release on release. First appearing on Metalheadz Platinum back in 2012, AI soon became a regular fixture at the equally revered drum and bass label, and their 2015 debut album ‘Timeline’ was a particular highlight on the Metalheadz main label.

They now make their return with the ‘Signs’ EP, three years on from their previous EP ‘Reprisal’. In true AI style, the forthcoming release compiles five elevated, spatially aware dancefloor cuts. Today, we have the track ‘Good Things’ for your listening pleasure. With a shimmering, heavenly aura, at the heart of ‘Good Things’ lies a breathtaking, tear-jerking liquid rhythm. A fine display of the multi-layered excellence of Artificial Intelligence, the upcoming ‘Signs’ EP is yet another special addition to their weighty discography.

Artificial Intelligence ‘Signs’ EP is out 11th September via Metalheadz. 

Grab it here

This content was originally published here.

Robots Inform Artificial Intelligence Researchers That They’ll Take It From Here

The A.I. research team at MIT is hailing it as a breakthrough in their field that will finally allow them to kick back and relax a little bit. We have the latest on what the now-sentient robotic life forms have planned next.

 You can find The Topical on Apple Podcasts, Spotify, Google Podcasts, and Stitcher.

This content was originally published here.

AI rejects conservative human views on furniture, designs wacky chair

If you’re lucky enough to be in Italy for Milan Design Week this year, do yourself a favor and check out the world’s first “chair designed using artificial intelligence to be put into production.” With language that specific you know it must be interesting. Kartell, Philippe Starck, and Autodesk, a 3-D software company, collaborated on …

This content was originally published here.

Out of shape? Why deep learning works differently than we thought

Imagine a photo of a cat whose fur carries the skin texture of an elephant. Which animal do you see? You probably won’t have any trouble identifying the cat. Here is what a top-notch deep learning algorithm sees: an elephant!
This story is about why artificial neural networks see elephants where humans see cats. Moreover, it’s about a paradigm shift in how we think about object recognition in deep neural networks — and how we can leverage this perspective to advance neural networks. It is based on our recent paper at ICLR 2019, a major deep learning conference.
How do neural networks recognize a cat? A widely accepted answer to this question is: by detecting its shape. Evidence for this hypothesis comes from visualization techniques like DeconvNet, which suggest that along the different stages of processing (called layers), networks seek to identify increasingly large patterns in an image, from simple edges and contours in the first layers to more complex shapes such as a car wheel — until the object, say, a car, can be readily detected.
This intuitive explanation has attained the status of common knowledge. Modern deep learning textbooks such as the classic “Deep Learning” book by Ian Goodfellow and colleagues explicitly refer to shape-based visualization techniques when explaining how deep learning works, as do other influential researchers like Nikolaus Kriegeskorte (p. 9):

“The network acquires complex knowledge about the kinds of shapes associated with each category. […]

High-level units appear to learn representations of shapes occurring in natural images, such as faces, human bodies, animals, natural scenes, buildings, and cars.”

But there is a problem: Some of the most important and widely used visualization techniques, including DeconvNet, have recently been shown to be misleading: instead of revealing what a network looks for in an image, they merely reconstruct image parts — that is, those beautiful human-interpretable visualizations have little to do with how a network arrives at a decision.
That leaves little evidence for the shape hypothesis. Do we need to revise the way we think about how neural networks recognize objects?
What if the shape hypothesis is not the only explanation? Beyond the shape, objects typically have a more or less distinctive color, size and texture. All of these factors could be harnessed by a neural network to recognize objects. While color and size are usually not unique to a certain object category, almost all objects have texture-like elements if we look at small regions — even cars, for instance, with their tyre profile or metal coating.
And in fact, we know that neural networks happen to have an amazing texture representation — without ever being trained to acquire one. This becomes evident, for example, when considering style transfer. In this fascinating image modeling technique, a deep neural network is used to extract the texture information from one image, such as the painting style. This style is then applied to a second image, enabling one to “paint” a photograph in the style of a famous painter. (You can try it out yourself here!)
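To make the texture-extraction step concrete, here is a minimal sketch of Gatys-style style transfer in PyTorch. This is illustrative code rather than the authors’ implementation: the file names are placeholders, the layer indices follow common practice for VGG19, and ImageNet normalization and pixel clamping are omitted for brevity. The key idea is that Gram matrices of convolutional feature maps capture texture statistics while discarding spatial layout.

```python
# Minimal Gatys-style transfer sketch; "content.jpg" / "style.jpg" are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained VGG19 feature extractor; we only read activations, never train it.
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = [0, 5, 10, 19, 28]   # conv1_1 ... conv5_1: texture (style) statistics
CONTENT_LAYER = 21                  # conv4_2: content (spatial layout)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])

def load(path):
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def features(x):
    """Collect activations at the style and content layers."""
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(x):
    """Gram matrix: channel-by-channel feature correlations = texture statistics."""
    b, c, h, w = x.shape
    f = x.view(c, h * w)
    return f @ f.t() / (c * h * w)

content_img, style_img = load("content.jpg"), load("style.jpg")
target_style = [gram(s).detach() for s in features(style_img)[0]]
target_content = features(content_img)[1].detach()

# Optimize the pixels of a copy of the content image.
image = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([image], lr=0.02)

for step in range(300):
    opt.zero_grad()
    style, content = features(image)
    style_loss = sum(F.mse_loss(gram(s), t) for s, t in zip(style, target_style))
    content_loss = F.mse_loss(content, target_content)
    (1e6 * style_loss + content_loss).backward()
    opt.step()
```

Swapping in a photograph of elephant skin as the “style” image, rather than a painting, is exactly what produces the cue-conflict images described below.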
The fact that neural networks acquire such a powerful representation of image textures despite being trained only on object recognition suggests a deeper connection between the two. It is a first piece of evidence for what we call the texture hypothesis: textures, not object shapes, are the most important aspect of an object for AI object recognition.
How do neural networks classify images: based on shape (as commonly assumed) or texture? In order to settle this dispute, I came up with a simple experiment to find out which explanation is more plausible. The experiment is based on images in which shape and texture provide evidence for distinctly different object categories. We created these cue-conflict images with style transfer: the same technique used to “paint” a photograph in the style of van Gogh can be used to create a cat with the texture of an elephant, if the input is a photograph of elephant skin instead of a painting.
Using images like these, we can now investigate shape or texture biases by looking at classification decisions from deep neural networks (and humans for comparison). Consider the following analogy: We would like to find out whether someone speaks Arabic or Chinese, but we are not allowed to talk to them. What could we do? One possibility would be to take a piece of paper, write “go left” in Arabic, next to it “go right” in Chinese, and then simply observe whether the person would walk right or left. Similarly, if we show an image with conflicting shape and texture to a deep neural network, we can find out which “language” it speaks by observing whether it makes use of the shape or the texture to identify the object (that is, whether it thinks the cat with elephant texture is a cat or an elephant).
This is precisely what we did. We conducted a series of nine experiments encompassing nearly a hundred human observers and many widely used deep neural networks (AlexNet, VGG-16, GoogLeNet, ResNet-50, ResNet-152, DenseNet-121, SqueezeNet v1.1), showing them hundreds of images with conflicting shapes and textures. The results left little room for doubt: we found striking evidence in favor of the texture explanation! A cat with elephant skin is an elephant to deep neural networks, and still a cat to humans. A car with the texture of a clock is a clock to deep neural networks, just as a bear with the surface characteristics of a bottle is recognized as a bottle. Current deep learning techniques for object recognition primarily rely on textures, not on object shapes.
Consider one exemplary result for ResNet-50, a commonly used deep neural network. Looking at the percentages it assigns to its first three “guesses” (classification decisions), the cat with elephant skin is classified as an elephant based on the texture, rather than as a cat based on its shape. Current AI object recognition seems to work quite differently from how we previously assumed, and is fundamentally different from how humans recognize objects.
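This readout is easy to reproduce with any pretrained classifier. Below is a hedged sketch using standard torchvision code (not the authors’ exact evaluation pipeline); “cue_conflict.png” is a placeholder for a stylized image like the cat with elephant skin:

```python
# Feed a cue-conflict image to a pretrained ResNet-50 and print its top-3 guesses.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("cue_conflict.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)[0]

top3 = torch.topk(probs, k=3)
for p, idx in zip(top3.values, top3.indices):
    # Mapping indices to class names requires the standard ImageNet label list.
    print(f"class {idx.item():4d}: {p.item():.1%}")
```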
Is there anything we can do about this? Can we make AI object recognition more human-like? Can we teach it to use shapes instead of textures?
The answer is yes. Deep neural networks, when learning to classify objects, make use of whatever information is useful. In standard images, textures reveal a lot about object identities, so there may simply be no need to learn a lot about object shapes. If the tyre profile and glossy surface already give the object identity away, why bother checking whether the shape matches, too? This is why we devised a novel way to teach neural networks to focus on shapes instead of textures, in the hope of eliminating their texture bias. Again using style transfer, it is possible to exchange the original texture of an image for an arbitrary different one. In the resulting images, the texture is no longer informative and thus the object shape is the only useful information left. If a deep neural network wants to classify objects from this new training dataset, it now needs to learn about shapes.
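Here is a minimal sketch of that retraining step, assuming the texture-randomized images have already been generated (for instance with style transfer, as sketched above) and saved in the usual one-folder-per-class layout. The paper used AdaIN-based stylization at full ImageNet scale; the directory name and hyperparameters below are placeholder assumptions:

```python
# Train a classifier on texture-randomized ("stylized") images so that shape
# is the only informative cue left.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("stylized_imagenet/train", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

model = models.resnet50(num_classes=len(dataset.classes))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    # Texture is uninformative in these images, so minimizing this loss
    # pushes the network toward shape-based features.
```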
After training a deep neural network on thousands and thousands of these images with arbitrary textures, we found that it actually acquired a shape bias instead of a preference for textures! A cat with elephant skin is now perceived as a cat by this new shape-based network. Moreover, there were a number of emergent benefits. The network suddenly got better than its normally trained counterpart at both recognizing standard images and locating objects in images, highlighting how useful human-like, shape-based representations can be. Our most surprising finding, however, was that it learned how to cope with noisy images (in the real world, this could be objects behind a layer of rain or snow) — without ever seeing any of these noise patterns before! Simply by focusing on object shapes instead of easily distorted textures, this shape-based network is the first deep neural network to approach general, human-level noise robustness.
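The robustness claim can be checked with a simple evaluation loop. Here is a sketch, assuming a validation loader that yields images in the [0, 1] range; the model and loader names are placeholders for networks and data prepared as described above:

```python
# Compare top-1 accuracy under increasing amounts of additive Gaussian noise.
import torch

def accuracy_under_noise(model, loader, sigma):
    """Top-1 accuracy when i.i.d. Gaussian noise of std `sigma` is added."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            # Assumes images are in [0, 1]; clamp keeps pixels in a valid range.
            noisy = (images + sigma * torch.randn_like(images)).clamp(0, 1)
            preds = model(noisy).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

# Placeholder usage: sweep the noise level for both model variants.
# for sigma in (0.0, 0.1, 0.2, 0.4):
#     print(sigma, accuracy_under_noise(shape_biased_model, val_loader, sigma))
```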
At the crossroads of human visual perception and artificial intelligence, inspiration can come from both fields. We used knowledge about the human visual system and its preference for shapes to better understand deep neural networks, learning that they primarily use textures to classify objects. This led to the creation of a network that more closely resembles robust, human-like performance on a number of different tasks. Looking ahead, if this network turns out to predict more accurately how neurons in the brain “fire” when we look at objects, it could be very useful for better understanding human visual perception. In this truly exciting age, inspiration from human vision has the potential to improve today’s AI technologies just as much as AI has the capabilities to advance today’s vision science!
The link below leads to the full paper on which this article is based.
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann & Wieland Brendel.
If not stated otherwise, images and figures are taken from this publication; the respective image rights mentioned there apply accordingly.


This content was originally published here.

Social Places releases AI early-warning system for Coronavirus sentiment and online reviews

Leading MarTech company Social Places has expedited the rollout of its artificial intelligence feature, which now ‘red flags’ urgent topics including disease, Coronavirus, COVID-19, hygiene, racism, theft, assault and drugs, as well as other important keywords that could have brand-damaging implications for South African companies…

This content was originally published here.

Beyond Artificial Intelligence: Investing in Deep Learning – Ticker Tape

The Deep Learning for Robotics summit June 28-29 could provide a sense of how this important technology is being utilized by major tech firms.

This content was originally published here.

Intel Demos Ambient PC That Observes You and Adapts to Your Needs – ExtremeTech

For a good while, you’d rarely find a computer without an Intel CPU. But with the rise of GPU-centric processing in blockchain and AI, as well as ARM’s dominance in mobile computing, Intel has struggled greatly to keep up in recent years. With a renewed necessity to innovate, Intel has announced a variety of strange ideas over the past few years. The latest addition, an ambient PC prototype, lands somewhere in the valley between cool and creepy.

In a recent announcement, Intel showed off a handful of prototype devices at this year’s Computex event in Taipei that demonstrate its efforts in artificial intelligence, modular computing, and ambient computing. But what is ambient computing, exactly? The term refers to a more responsive breed of electronics that observe and react to the presence of people. These devices remain on at all times, always watching, with the goal of adapting to and serving our needs. Futurist and communications marketing executive Gary Grossman explains how this technology will ideally fit into our lives:

“Ambient computing covers applications incorporating machine learning and other forms of artificial intelligence and is characterized by human-like cognitive and behavioral capabilities and contextual awareness. It creates a digital environment in which companies integrate technology seamlessly and invisibly into everything around us, maximizing usefulness while minimizing demands on our attention.”

It’s not hard to imagine how ambient computing could pose a more significant threat to owners than current smart home technology, given that everything from lightbulbs to toilets has become a target in recent years. That said, let’s start with the positive aspects ambient computing can provide. Much like how our smartphones anticipate the apps we’ll search for or the tasks we wish to perform, ambient computing attempts to anticipate us by learning from how we use connected technologies in our homes.

Intel also hopes to increase usefulness and efficiency through this technology by introducing more “closed-lid” tasks that machines can perform, minimizing the transition from sleeping states to waking states. Ambient devices can remain connected when not in use to download information users may want without delay, much like Apple’s Power Nap feature introduced several years ago. Leveraging local voice-ID technology, presence sensors, and 180- to 360-degree cameras, Intel also expects ambient computing to improve the quality of video conferencing and provide more security while enabling “hands-free” conveniences.

While that all may sound nice in a perfect world, we live in a reality filled with many security exploits that have likely already affected you and people you know in some way. No matter how much we attempt to secure our own data, we cannot control or even anticipate the vulnerabilities many companies leave open. While Intel attempts to proactively address these issues—which is more than can be said for most companies—it’s had its issues in the past. With necessary Spectre and Meltdown patches hitting Intel processor performance the hardest, security and performance require a delicate balance that may prove difficult for a company in a hurry to compete.

Speculation aside, Intel continues to assert its commitment to user privacy, specifically with regard to ambient computing, but unexpected security flaws will occur regardless of anyone’s best efforts. Intel is not alone in developing ambient computing devices, and the first exploit may target another company’s technology. In most cases, the potential flaws in the components produced by large microchip companies don’t represent the best target for malicious hackers. Connected technologies remain more vulnerable by nature: each connection is a link in a chain of multiple products from different companies, and it only takes one weak link to break it. Even with a perfect approach, Intel’s technology can only assert so much control over the other links in its chains. It will be important to keep these risks in mind as more intelligent, adaptive devices reach the consumer market.

Along with the ambient computing prototypes demoed at Computex, Intel announced an AI on PC development kit created in partnership with Asus, and a NUC Compute Element that continues its efforts to modularize PC building. While ambient computing may feel like a technology surrounded by risk, these parallel announcements demonstrate Intel’s interest in making development more accessible. Today, purchasing smart home devices can feel like putting a black box of convenience in the home, leaving us wondering whether we’ve relinquished our privacy or put ourselves at other risks. In the future, however, the creation of ambient devices may become a simpler task with modular parts. When we have a hand in creating the technology ourselves, we can retain more control over our security and privacy.

This content was originally published here.