
The Rise of Emotionally Intelligent AI

I have studied emotional intelligence as a hobby for a long time. Until recently, I believed it would remain one of the core human advantages even after artificial intelligence takes over all tasks requiring memorization and logic.
During the past few years, I have focused my studies on emotionally intelligent algorithms, as they are the business of my startup, Inbot.
The more I have researched them, the more convinced I have become that people are no longer ahead of AI at emotional intelligence.
Yuval Noah Harari writes in his best-selling book Homo Deus that humans are essentially a collection of biological algorithms shaped by millions of years of evolution. He continues to claim that there is no reason to think that non-organic algorithms couldn’t replicate and surpass everything that organic algorithms can do.
The same is echoed by Max Tegmark in his book Life 3.0: Being Human in the Age of Artificial Intelligence . He makes a compelling case that practically all intelligence is substrate independent.
Let that sink in for a moment. Our emotions and feelings are organic algorithms that respond to our environment, algorithms shaped by our cultural history, upbringing, and life experiences. And they can be reverse engineered.
If we agree with Dr. Harari, who is a professor at the Hebrew University of Jerusalem, and Dr. Tegmark, who is a professor at MIT, computers will eventually become better at manipulating human emotions than humans themselves are.
People are generally not emotionally intelligent
In real life situations, we are actually pretty bad at emotional intelligence.
Most of us are ignorant about even the most basic emotional triggers we set off in others. We end up in pointless fights, dismiss good arguments because they go against our biases, and judge people based on stereotypes.
We don’t understand the effects of cultural context, family upbringing or the current personal life situation of our discussion partner.
We rarely try to put ourselves in the other person’s position. We don’t try to understand their reasoning if it goes against our worldview. We don’t want to challenge our biases or prejudices.
Online, the situation is much worse. We draw hasty and often mistaken conclusions from comments by people we don’t know at all, and lash out at them if we believe their point goes against our biases.
Lastly, we have an evolutionary tendency to see life as the “survival of the fittest.” This predisposes us to take advantage of others, to focus on boosting our egos, and to put ourselves on a pedestal.
The most successful people often lie to gain advantage, manipulate to get ahead, and deceive to hide their wrongdoings. It’s about winning at all costs, causing a lot of emotional damage on the way.
AI is advancing rapidly at emotional intelligence
While we humans continue to struggle to understand each other, emotionally intelligent AI has advanced rapidly.
Cameras in phones are ubiquitous, and face-tracking software is already advanced enough to analyze the smallest details of our facial expressions. The most advanced systems can even tell faked emotions from real ones.
In addition, voice recognition and natural language processing algorithms are getting better at figuring out our sentiment and emotional state from the audio.
The technologies to analyze emotional responses from faces and voice are already way beyond the skills of an average human, and in many areas exceed the abilities of even the most skilled humans.
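At the simplest end of this spectrum, sentiment can be estimated from text alone. Here is a deliberately toy, lexicon-based sketch (the word weights are invented for illustration; production systems use trained models over far richer features):

```python
# Toy lexicon-based sentiment scorer. The word valences below are
# hypothetical; real systems learn them from large labeled corpora.
LEXICON = {"love": 2, "great": 1, "good": 1, "bad": -1, "hate": -2, "awful": -2}

def sentiment(text: str) -> int:
    """Sum the valence of known words; a positive total suggests positive sentiment."""
    return sum(LEXICON.get(word, 0) for word in text.lower().split())

print(sentiment("i love this great idea"))   # positive
print(sentiment("i hate this awful fight"))  # negative
```

Even this crude approach captures the basic idea: emotional signals in our output are measurable, and a machine can score them at scale.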
Artificial intelligence can look at our faces to recognize private qualities such as our sexual orientation, political leaning, or IQ.
While AI can decipher almost any emotion from your face or speech, we haven’t yet put much effort into the scientific study of emotionally intelligent AI.
The advances in this field are currently almost solely driven by commercial interests and human greed.
Media and entertainment companies need our attention and engagement to make money. Companies like Facebook and YouTube have a large number of engineers working to create ever better ways to addict us to their content.
I wrote about this earlier in a short post named The worrying growth of the business of addiction.
These algorithms are designed to pull our emotional triggers to keep us entertained. And they have become very, very good at it.
Some of the core developers of these algorithms have grown scared of the power technology has over us, and say our minds can be hijacked.
Big data gives an edge to emotionally intelligent AIs
Unlike people, AI can leverage your whole online history, which in most cases is more information than anybody can remember about any of their friends.
Some of the most advanced machine learning algorithms developed at Facebook and Google have already been applied on a treasure trove of data from billions of people.
These algorithms already know what your desires, biases and emotional triggers are, based on your communication, friends and cultural context. In many areas, they understand you better than you know yourself.
The progress of these algorithms has gone so far that Facebook and Google now stand accused of creating filter bubbles that can affect public opinion, rapidly change political landscapes, and sway elections.
These algorithms are getting so complex that they are becoming impossible for humans to fully control. Facebook’s chief of security Alex Stamos recently tweeted that journalists are unfairly accusing the company of manipulation, when in reality no solutions are available that wouldn’t lead to someone accusing it of bias.
The future of emotional artificial intelligence
People have a lot of biases, which cloud our judgment. We see the world as we wish it to be, not as it is. Algorithms today, being made by people, incorporate some hints of our biases too. But if we wanted to remove such biases, it would be relatively easy to do.
As artificial intelligence gets better at manipulating us, I see a future where people happily submit their lives to the algorithms. We can already see it in practice. Just look around yourself in public — almost everyone is glued to their smartphones.
Today, people touch their phones on average 2,617 times a day.
We are approaching an era, when artificial intelligence uses humans as organic robots to realize its goals. To make that happen, thousands of engineers are already building an API to humans.
The second part of this series is called The Human API.
Berlin, 9.10.2017
Mikko Alasaarela
I look forward to debating this interesting topic with you. Please comment and share!
My company Inbot is among the pioneers that leverage AI algorithms to offer real long term monetary value to humans for their data and services. We exist to counter the trend of intelligent machines enslaving humans, and to provide human opportunity in the age of artificial intelligence.
Join our Ambassador Community to earn long term dividends by introducing innovative AI businesses to customers.
Join 30,000+ people who read the weekly 🤖Machine Learnings🤖 newsletter to understand how AI will impact the way they work.

This content was originally published here.

Artificial intelligence in business | McKinsey

The executive’s playbook for artificial intelligence in business: Size the AI opportunity in your industry using our market data, understand the data and technology you need to seize it, and learn AI best practices to realize the value.


AI Computer Chip ‘Smells’ Danger, Could Replace Sniffer Dogs

AI computer chip ‘smells’ the world

Researchers from Cornell University and Intel produced a “neuromorphic” chip called Loihi that reportedly makes computers think like biological brains, according to Daily Mail.

The researchers created a circuit on the chip that mirrors the organic circuits found in the olfactory bulb of a dog’s brain, which is how dogs process their sense of smell.

The Loihi chip can identify a specific odor on the first try and can even pick it out from other, background smells, Intel said, according to Daily Mail.

The chip can even detect smells humans emit when sick with a disease — which varies depending on the illness — and smells linked to environmental gases and drugs.

Computer chips out-sniffing sniffer dogs

The key to sniffer dogs isn’t their olfactory system alone, but their incredible ability to remember — this is why they’re trained. Similarly, the artificial intelligence of the chip is trained to identify disparate smells and remember, so that next time, it knows.

The chip processes information much as mammalian brains do, using electrical signals to process smells. When a person smells something, the air molecules interact with nasal receptors that forward signals to the olfactory bulb in the brain.

Then the brain translates the signals to identify which smell it’s experiencing, based on memories of previous experiences with the specific smell.
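The “learn once, recognize later” idea can be sketched with a toy nearest-neighbor classifier: each odor is a vector of chemical-sensor readings, and a new reading is matched to its closest trained memory. All the numbers and names below are invented for illustration; Loihi’s actual spiking-network algorithm is far more sophisticated.

```python
import math

# Hypothetical trained "memories": one sensor-reading vector per odor.
TRAINED = {
    "ammonia": [0.9, 0.1, 0.3],
    "methane": [0.2, 0.8, 0.1],
    "acetone": [0.1, 0.2, 0.9],
}

def classify(reading):
    """Return the trained odor whose stored reading is closest to the new one."""
    return min(TRAINED, key=lambda odor: math.dist(TRAINED[odor], reading))

print(classify([0.85, 0.15, 0.25]))  # a noisy ammonia-like reading
```

The key property mirrored here is memory: once an odor has been seen a single time, every future noisy reading can be matched against it.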

“We are developing neural algorithms on Loihi that mimic what happens in your brain when you smell something,” said Senior Research Scientist in Intel’s Neuromorphic Computing Lab, Nabil Imam, in a statement, according to Daily Mail.

Imam added that the work “demonstrates Loihi’s potential to provide important sensing capabilities that could benefit various industries.”

So far, the researchers have trained it on ten noxious smells, including ammonia, methane, and acetone. It can be installed on robots in airports to help identify hazardous objects, or integrated with sensors in power plants or hospitals to detect dangerous gases.

Similar biotechnology has recently been implemented in grasshoppers outfitted with computer chips to sniff out bombs. However, this negatively affects their lifespan, limiting their use.

While sniffer dogs might one day be out of a job, circuits that use AI to mimic the process of smell bring us one step closer to recreating the human sensorium in artificial intelligence.


What Is the G20? – The New York Times

OSAKA, Japan — The annual Group of 20 summit meeting, which brings together President Trump and other world leaders, is intended to foster global economic cooperation. But with so many top officials in one place, it also serves as an all-purpose jamboree of nonstop formal and informal diplomatic activity.

This year’s meeting takes place in Osaka, Japan, on Friday and Saturday, and the official agenda includes trade, artificial intelligence, women’s empowerment and climate change. If the members can reach consensus on those subjects and others, they will produce an official joint declaration at the end.

That might not be easy: President Emmanuel Macron of France, in a challenge to the United States, has threatened not to sign any joint statement that does not adequately address climate change. And Mr. Trump, before reaching Osaka, lashed out at Japan, Germany and India — all American allies.

Mr. Trump’s primary focus, however, is likely to be on a series of one-on-one meetings with foreign leaders. On Friday, he is to sit down with President Vladimir V. Putin of Russia, with whom he hopes to refresh relations, and on Saturday, he is to meet with President Xi Jinping of China, with whom he planned to discuss a trade standoff that has spooked global markets.


Ice Hockey World Championships feature world’s first virtual sports anchor – CNN

The Swedish hockey team’s main sponsor Svenska Spel — a state-owned company operating in Sweden’s regulated gambling market — is behind the idea.
In November last year, China’s state news agency debuted the first ever 24/7 virtual anchor, but Grönborg’s digital clone is the first with a sole focus on sport.
The real Rikard Grönborg (right) with the virtual Rikard Grönborg.
From the first game on May 10 right up until the final on May 26, the Swedish language live stream will deliver 408 hours of news, interviews, and analysis around the world championships to fans back home.
“We spent hours filming myself in different positions and situations,” Grönborg, who led Sweden to back-to-back titles in 2017 and 2018, tells CNN Sport of the painstaking process to create his virtual duplicate.
“Then we recorded my voice saying all kinds of different, very, very weird sentences that didn’t really make sense when I recorded it … The filming was four or five hours and I think recording of my voice saying different sentences was at least three plus hours.
“I think the end product is pretty good and it’s obviously built an interest with the fans out there that want to follow our team in the world championships.”
On top of having Grönborg front the news stream, the concept appeals to fans by drawing upon 20 years of ice hockey data to deliver stats and predict results.
Despite suffering defeats by the Czech Republic and Russia — the former being Sweden’s first defeat for 17 games in the world championships — Grönborg’s side defeated Italy, Norway, Austria, Switzerland, and Latvia to set up a quarterfinal clash against Finland.
Sweden’s players celebrate next to a fan during a 5-4 victory over Latvia.
The coach is confident his team can go the distance for a third consecutive tournament.
“Obviously we’re still going for gold,” he says. “We’ve been having great success over the last couple of years doing that. I think we definitely have a team among other great hockey nations here. We have a very good shot of going all the way.”
Recent years have seen a surge in the use of artificial intelligence, with the media industry just one of many examples of AI being deployed in the place of humans. A 2019 study by the Brookings Institution estimated that 25 percent of jobs in the US are at high risk of being replaced by AI and automation.
On the website willrobotstakemyjob.com — which evaluates the likelihood of professions being replaced by AI and robots — “reporters and correspondents” returns an 11 percent chance of losing its human touch, while “radio and television announcers” returns 10 percent.
Grönborg’s voice is recorded for the virtual reality broadcast.
So has the virtual Rikard Grönborg got CNN’s sports anchors fearing for their future?
“In my experience, viewers often want a personality they can trust to bring them their sports news or sports coverage,” says CNN World Sport anchor Alex Thomas, “someone genuine and knowledgeable and somebody they can relate to.
“I think, for now, that’s still a human rather than a virtual anchor.
“I have always worked with the philosophy that it’s not about me, it’s about the consumer experience. So, if in the future a virtual anchor or AI robot is better at presenting the sport than an actual person, then I wouldn’t have a problem with it.”
The immediate future for anchors and reporters seems safe, then.
“At the moment I don’t see such developments as a threat to most TV anchors, because there is much more to sports presenting than simply reading the results,” Neil Thurman, a professor of communication at LMU Munich, tells CNN.
“Virtual anchors also face the ‘uncanny valley’ effect — whereby viewers are uncomfortable with virtual presenters that are almost, but not quite, human.
“I see more potential for automation in sports-news videos in so-called text-to-video technology. This technology can produce short news videos from text stories, automatically choosing clips and pictures to match the story, which is told using captions or a computer-generated voice. Captions in particular avoid the ‘uncanny valley’ effect.”


How Did Google Wipe Out 700,000 Malicious Android Apps From Play Store? Using Artificial Intelligence

For its mass takedown of bad Android apps, categorized as copycats, apps with inappropriate content, and potentially harmful apps (PHAs), Google credits its machine learning-based detection models and techniques, which help it take action against bad apps and identify repeat offenders.


Forex Robot Multi Currency Scalper.

Forex Robot Multi Currency Scalper
The Intelligent Forex Robot Multi Currency Scalper (EA) is a fully automated trading robot that can select the best possible trades out of 28 symbols. It is based on a low-risk strategy, ensures trades are entered at the best possible times, and has an advanced money management system.
On which timeframes can the Forex Robot Scalper be used?
The Forex Robot is recommended for use on the M1 and M5 timeframes. The Forex Robot Multi Currency Scalper monitors all currency pairs and awaits signals on their behalf from a single open chart. Several profitable trades across the different symbols can add up to the overall profit while keeping the drawdown lower.
How Does The Forex Robot Multi Currency Scalper Work
For each of the currencies, the Forex Robot analyzes correlation and volatility based on an artificial intelligence algorithm, detects strong trend directions, and performs buy trades at lower prices and sell trades at higher prices.
The Intelligent Forex Trading Robot analyzes the market twenty-four hours a day for profitable trading opportunities. Buy/sell trading signals are generated by a powerful artificial intelligence-based system, and all signals pass filters that make the Forex Robot extremely accurate and secure.
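As a loose, hypothetical illustration of the volatility-filter-plus-trend logic described above (all function names, thresholds, and quotes are invented; a real EA runs inside a platform such as MetaTrader and is far more elaborate):

```python
import statistics

def signal(prices, vol_cap=0.01):
    """Toy per-symbol signal: stand aside when volatile, else trade with the trend."""
    returns = [b / a - 1 for a, b in zip(prices, prices[1:])]
    vol = statistics.pstdev(returns)
    if vol > vol_cap:          # volatility filter: too risky, skip the trade
        return "hold"
    trend = prices[-1] - statistics.fmean(prices)
    return "buy" if trend > 0 else "sell"

# Hypothetical recent quotes for two of the monitored symbols.
quotes = {
    "EURUSD": [1.100, 1.101, 1.102, 1.103],  # gentle uptrend
    "USDJPY": [110.4, 110.3, 110.2, 110.1],  # gentle downtrend
}
for symbol, prices in quotes.items():
    print(symbol, signal(prices))
```

Scanning many symbols with a filter like this is what lets several small profitable trades accumulate while keeping drawdown low, as the description claims.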
Download version. After payment is made, you will receive a download link. Free technical support is available via email, Skype, and TeamViewer.


BPOs to feel artificial intelligence ‘reality’ in next 3 to 5 years: Pernia

The use of artificial intelligence in the business process outsourcing industry will hit “harder” in the “next 3 to 5 years,” Socioeconomic Planning Secretary Ernesto Pernia said Wednesday, as he urged the industry to upgrade the skills of its workers.


Baguio runs further tests on Huawei COVID-19 test machine | ABS-CBN News

MANILA — Baguio City in the northern Philippines is running further tests on technology from China’s Huawei that can detect “highly probable” coronavirus infections using artificial intelligence, its mayor said Friday.

There were technical glitches during the first trial last Tuesday and the test took 50 minutes instead of the ideal 2 minutes. The testing time was later reduced to 30 minutes and to 7 minutes on Thursday, Baguio Mayor Benjamin Magalong said.

“Mabuti na ‘yong sigurado tayo” (It’s better that we’re sure), Magalong said of the Huawei COVID-19 CT scan.

Should the Huawei scan detect a viral infection, patients will undergo further testing by getting fluids from the nose, also called a swab test, Magalong said.


The simplest explanation of machine learning you’ll ever read

You’ve probably heard of machine learning and artificial intelligence, but are you sure you know what they are? If you’re struggling to make sense of them, you’re not alone. There’s a lot of buzz that makes it hard to tell what’s science and what’s science fiction. Starting with the names themselves…

Machine learning is a thing-labeler, essentially.

I’m a statistician and neuroscientist by training, and we statisticians have a reputation for picking the driest, most boring names for things. We like it to do exactly what it says on the tin. You know what we would have named machine learning? The Labelling of Stuff using Examples!
Contrary to popular belief, machine learning is not a magical box of magic, nor is it the reason for $30bn in VC funding. At its core, machine learning is just a thing-labeler, taking your description of something and telling you what label it should get. Which sounds much less interesting than what you read on Hacker News. But would you have gotten excited enough to read about this topic if we’d called it thing-labeling in the first place? Probably not, which goes to show that a bit of marketing and dazzle can be useful for getting this technology the attention it deserves (though not for the reasons you might think).

It’s phenomenally useful, but not as sci-fi as it sounds.

What about artificial intelligence (AI)? While the academics argue about the nuances of what AI is and isn’t, industry is using the term to refer to a particular type of machine learning. In fact, most of the time people just use them interchangeably, and I can live with that. So AI’s also about thing-labeling. Were you expecting robots? Something sci-fi with a mind of its own, something humanoid? Well, today’s AI is not that. But we’re a species that sees human traits in everything. We see faces in toast, bodies in clouds, and if I sew two buttons onto a sock, I might end up talking to it. That sock puppet’s not a person, and neither is AI — it’s important to keep that in mind. Is that a letdown? Chin up! The real thing is far more useful.
Let me show you why you should be excited. What do you see in the photo?
You just took in some pretty complex data through your senses and, as if by magic, you labeled it ‘cat.’ That was so easy for you! How about if we wanted a computer to do the same task, to classify (label) photos as cat/not-cat?

Machine learning is a new programming paradigm, a new way of communicating your wishes to a computer.

In the traditional programming approach, a programmer would think hard about the pixels and the labels, communicate with the universe, channel inspiration, and finally handcraft a model. A model’s just a fancy word for recipe, or a set of instructions your computer has to follow to turn pixels into labels.
But think about what those instructions would be. What are you actually doing with these pixels? Can you express it? Your brain had the benefit of eons of evolution and now it just works; you don’t even know how it does it. That recipe is pretty hard to come up with.
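To feel how hard that handcrafting is, imagine actually writing cat-detection rules by hand. A deliberately naive sketch (the feature names are invented stand-ins for raw pixels, and every rule breaks on some real photo):

```python
# A naive hand-written "recipe" for cat vs. not-cat. Each rule is a guess:
# black cats, curled-up cats, or hairless cats all defeat one check or another.
def is_cat_by_rules(photo):
    """photo: dict of hand-measured features (a stand-in for raw pixels)."""
    return (
        photo.get("has_pointy_ears", False)
        and photo.get("has_whiskers", False)
        and photo.get("fur_color") in {"orange", "gray", "black", "white"}
    )

print(is_cat_by_rules({"has_pointy_ears": True, "has_whiskers": True,
                       "fur_color": "orange"}))   # True
print(is_cat_by_rules({"has_whiskers": True,
                       "fur_color": "black"}))    # a curled-up cat: False
```

The rules misfire the moment reality deviates from the programmer’s guesses, which is exactly why this recipe is so hard to come up with.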

Explain with examples, not instructions.

Wouldn’t it be better if you could just say to the computer, “Here, look at a bunch of examples of cats, look at a bunch of examples of not-cats, and just figure it out yourself”? That is the essence of machine learning. It is a completely different programming paradigm. Now, instead of giving explicit instructions, you program with examples and the machine learning algorithm finds patterns in your data and turns them into those instructions you couldn’t write yourself. No more handcrafting of recipes!
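That paradigm shift can be shown in a few lines. In this toy sketch (a made-up 1-D “ear length” feature; real systems learn from millions of richer examples), we never write the decision rule ourselves — the algorithm finds the best threshold from labeled examples:

```python
# Toy "learning from examples": find the ear-length threshold that best
# separates cats from not-cats, instead of hand-picking it ourselves.
examples = [(4.0, "cat"), (4.5, "cat"), (5.0, "cat"),
            (1.0, "not-cat"), (1.5, "not-cat"), (2.0, "not-cat")]

def train(data):
    """Return the candidate threshold with the fewest misclassified examples."""
    candidates = sorted(x for x, _ in data)
    def errors(t):
        return sum((x >= t) != (label == "cat") for x, label in data)
    return min(candidates, key=errors)

threshold = train(examples)
predict = lambda x: "cat" if x >= threshold else "not-cat"
print(threshold, predict(4.2), predict(1.2))
```

The instructions (“label as cat if the feature clears this threshold”) were produced by the data, not by us — which is the whole point.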

AI allows you to automate the ineffable.

Why is that exciting? This is about expressing our wishes to computers in a way we couldn’t before. We love to get computers to do stuff for us. But how can we possibly give instructions if the instructions are really hard to think up? If they’re ineffable?
AI and machine learning are about automating the ineffable. They’re about explaining yourself using examples instead of instructions. This unlocks a huge class of tasks that we couldn’t get computers to help us with in the past because we couldn’t express the instructions. Now all of these tasks become possible — machine learning represents a fundamental leap in human progress. It is the future and the future is here!
This article has been translated to 🇦🇪 Arabic, 🇨🇳 Chinese, 🇳🇱 Dutch, 🇫🇷 French, 🇩🇪 German, 🇮🇹 Italian, 🇯🇵 Japanese, 🇵🇱 Polish, 🇧🇷 Portuguese, 🇵🇹 Portuguese, 🇷🇺 Russian, 🇪🇸 Spanish, and 🇹🇷 Turkish.
