ICYMI: Study Says Artificial Intelligence Industry Must Confront ‘Diversity Disaster’ | Colorlines

The artificial intelligence (AI) industry—which is overwhelmingly populated by White people and men—is due for a reckoning with its diversity crisis, according to a new report released on Tuesday (April 16) by the AI Now Institute at New York University.

The authors of “Discriminating Systems: Gender, Race and Power in AI” call AI’s lack of diversity a “disaster.” Women make up just 18 percent of authors at AI conferences, 15 percent of research staff at Facebook and 10 percent at Google, according to the report. Black workers make up only 2.5 percent of Google employees and 4 percent of employees at both Facebook and Microsoft. The study notes that much of the data treats gender as a binary, and that the “overwhelming” focus on women in diversity efforts privileges White women.

The AI industry largely frames this lack of diversity as a “pipeline problem,” referring to the supply of candidates available to be hired. But the study says companies need to stop placing the burden of addressing the diversity crisis on those who experience discrimination and instead look at the perpetrators.

Per the report, “pipeline” research has yet to lead to meaningful action:

A recent survey of 32 leading tech companies found that though many express a desire to improve diversity, only 5 percent of 2017 philanthropic giving was focused on correcting the gender imbalance in the industry, and less than 0.1 percent was directed at removing the barriers that keep women of color from careers in tech. This meant that out of $500 million in total philanthropic giving by these companies that year, only $335,000—across 32 tech companies—went to programs focused on outreach to women and girls of color.

The AI sector must confront the racist underpinnings of systems that are designed for the classification, detection and prediction of race and gender, which harkens back to histories of “race science.” And it must reconsider the production, selection and distribution of products that give power to those who benefit most from these products.

The industry must also reconsider the production of products that work in favor of the powerful, perpetuate racism and benefit the carceral state. The report cites several examples, including image recognition systems that miscategorize Black people, Uber facial recognition that fails to identify trans drivers, chatbots that adopt racist and misogynistic language, and sentencing algorithms that discriminate against Black defendants.

“Systems that use physical appearance as a proxy for character or interior states are deeply suspect,” the report states. “Such systems are replicating patterns of racial and gender bias in ways that can deepen and justify historical inequality.”

The study goes on to highlight the significance of recent worker-led initiatives and actions that have shaken up the tech industry and pushed it to change. Among them is last year’s Google Walkout, in which hundreds of Google employees staged protests against what they described as a toxic and discriminatory work environment.

“As AI systems are embedded in more social domains, they are playing a powerful role in the most intimate aspects of our lives: our health, our safety, our education and our opportunities,” the report concludes. “It’s essential that we are able to see and assess the ways that these systems treat some people differently than others, because they already influence the lives of millions.”

Read the full report here.

This content was originally published here.

Forum on Artificial Intelligence and Machine Learning

This forum will convene experts in the AI and machine learning fields to discuss the future of these technologies and their implications for the communications marketplace. The event will also include demonstrations to enable the public to see these emerging technologies in action.


How artificial intelligence could push us closer to nuclear war

As AI slowly erodes the foundations that made the Cold War possible, we may find ourselves hurtling towards all-out nuclear war. There’s a “significant potential” for artificial intelligence to undermine the foundations of nuclear security, according to a new report published by the RAND Corporation, a nonprofit research organization.


Would you watch a movie written and animated by artificial intelligence?

The next time you sit down to watch a movie, the algorithm behind your streaming service might recommend a blockbuster that was written by AI, performed…


Russia’s Quest to Lead the World in AI Is Doomed – Defense One

In 2017, Russian President Vladimir Putin famously stated that whoever becomes the leader in artificial intelligence “will become the ruler of the world.” Most experts on technology and security would agree with Putin about the importance of AI, which will ultimately reshape healthcare, transportation, industry, national security, and more. Nevertheless, Moscow’s recognition of AI’s importance will not produce enough breakthroughs to obtain the technological edge that it so deeply desires. Russia will ultimately fail in its quest to become a leader in AI because of its inability to foster a culture of innovation. 

Russia’s anxieties about competing in the information age are far from new. In 1983, then-Soviet Minister of Defense Nikolai Ogarkov lamented to the New York Times that in the United States, “small children — even before they begin school — play with computers…. Here we don’t even have computers in every office of the Ministry of Defense.” The Soviets were concerned about Ronald Reagan’s Strategic Defense Initiative, a land- and space-based missile defense system, in part due to its artificial intelligence-enabled battle management system. In short, the Soviets feared that they would be unable to compete as the information revolution accelerated.

Russian Prime Minister Dmitry Medvedev has shared many of Ogarkov’s concerns about modern technology for the entirety of his political career. In 2010, Medvedev established Skolkovo Technopark, Russia’s own version of Silicon Valley, outside Moscow to foster innovation and develop breakthroughs in emerging technologies. Within five years, Skolkovo had more than 30,000 people working on a modern campus that closely resembled Google headquarters. Residents of Skolkovo received investments from Microsoft, IBM, and Intel. Nevertheless, due to corruption and state interference, many of the top innovators in Skolkovo have fled Russia and are now working in the U.S. and Europe. 

Endemic corruption, no protections for private property, and a pervasive state security apparatus make Russia a very difficult environment for innovation to flourish. Scientists want to collaborate with researchers around the world who are making headway in their respective fields. In Russia, the state has traditionally impeded the free flow of knowledge across its borders because Moscow views uncontrolled information as a political and national security threat. 

Yet Russian leaders seem not to have learned from the difficulties with Skolkovo. In February, Putin announced that the Russian government will publish an AI strategy by the middle of June 2019. Unsurprisingly, much of Moscow’s focus is on using AI to improve Russia’s military capabilities. Last year, the Russian Ministry of Defense organized a competition to foster breakthroughs in the field. Additionally, there is an Artificial Intelligence Association that is considering the broad impacts of AI on society. This month, it is a key sponsor of a conference aimed at developing technologies to expand the prowess of the Russian armed forces. Regardless, the Russian government’s AI innovation efforts will ultimately not succeed for the same reasons that Skolkovo failed.

The Russian government will devote the preponderance of its AI resources to defense and national security. Thus, researchers are going to be heavily censored by the Russian security services. It will become increasingly difficult for Russian academics to have unfettered access to their Western colleagues due to security concerns. Additionally, any developments in the AI arena will be appropriated by the state, creating a disincentive for commercial investment. As a result, it is highly probable that much of Russia’s leading talent in fields relevant to AI research will leave, just like many of their Skolkovo colleagues, to work in countries that will enable them to achieve their goals.

Russia’s political system and culture of corruption will prevent it from becoming a center of AI innovation. Ultimately, it will continue to fall farther behind the United States, China, and Western Europe in AI research and other advanced technologies. Just like in the 1980s, Russia is not equipped to effectively compete in a world that is so heavily shaped by the information revolution. 

Aaron Bateman is pursuing a PhD in the history of science and technology at Johns Hopkins University. He also served as a U.S. Air Force intelligence officer with assignments at the National Security Agency and the Pentagon. He has published on Russian foreign policy, technology, and diplomacy. 


Google is killing off its Gmail substitute app before it could become popular

Inbox came with provisions for snoozing emails until later and for trying the latest artificial intelligence (AI)-powered experiences like Smart Reply, Nudges, high-priority notifications…


Robert Downey Jr. Really Wants to Save The World – KOSI 101

For the last 10 years, Robert Downey Jr. played Iron Man, a hero who fought hard to save the world, had a really cool AI assistant named Jarvis and was super knowledgeable about all things tech. I guess it really rubbed off on Robert, as he has taken a huge interest in really saving the world and using technology to do so. He has shared his concern for global warming and the “mess” we humans leave behind: pollution.

In April of next year, he plans to launch The Footprint Coalition and vows to dedicate the next 11 years to making a huge difference for the environment, including the fight against global warming. He also has a YouTube Red documentary coming out about artificial intelligence.


Airbus mulls single-pilot flights as Artificial Intelligence could enable autonomous planes

Skift Take Airbus acknowledges that the “explainability” of artificial intelligence is an impediment to getting regulators to sign off on certain products. Passengers will definitely need some very good explainers, too.

Though autopilot is not a new technology, Airbus Chief Technology Officer Grazia Vittadini said the company is hoping current advances in artificial intelligence will help complete the step to completely autonomous planes.

“That’s what we’re looking into, artificial intelligence, to free up pilots from more mundane routines,” Vittadini said in an interview with Accenture CTO Paul Daugherty at Munich’s Digital Life and Design conference Sunday.

Currently, the company is working on moving to single-pilot operations, with full autonomy coming later.


Airline executives, though reluctant to speak on the topic, stand to benefit from autonomous planes as they seek to cut costs and handle ongoing shortages of qualified pilots, two issues that pilotless planes could help address.

The biggest challenge for planemakers like Airbus is convincing regulators to approve the technology, Vittadini said.

“Explainability of artificial intelligence is a real challenge for us when it comes to the certification of products,” she said.


Interpol Enlists Korean Startup to Track Crypto on the Dark Web

The International Criminal Police Organisation (Interpol) has announced a partnership with South Korean data intelligence startup S2W Lab to analyze dark web activity, including cryptocurrency transactions.

The startup announced the partnership on March 20, with S2W Lab signing a one-year contract with Interpol.

Interpol sets its sights on the dark web

S2W Lab claims to have “captured a massive amount of Dark Web data” and “established a Dark Web database.” S2W examines the data using artificial intelligence to establish “links among multiple domains and among multiple timeframes.”

S2W boasts that it has secured several patents “on the subject of Dark Web and cryptocurrency” analysis.

Suh Sangduk, S2W Lab’s CEO, emphasized the challenges of responding to cybercrime on the dark web due to the “wide usage of cryptocurrencies.”

He added that the partnership will see S2W “cooperate with international investigations” to ensure that distributed ledger technologies are “used for good purposes.”

S2W identifies black market for face-masks amid coronavirus panic

After the startup launched in September 2018, it developed its methods of analysis alongside researchers from the Korea Advanced Institute of Science and Technology (KAIST).

On March 19, S2W Lab identified the formation of a black market for face-masks on Dark Web marketplaces.

The firm analyzed the prevalence of keywords pertinent to coronavirus across popular darknet markets, discovering that 10-packs of face-masks are frequently selling for between $85 and $170 on leading anonymous marketplaces.

On Feb. 20, S2W identified the personal information of 3 million Koreans that had been leaked onto the dark web. 

Interpol cracks down on cryptojacking

During January, Interpol announced that it had reduced the number of MikroTik routers infected with cryptojacking malware in South-East Asia by 78%.

Through a partnership with cybersecurity firm, Trend Micro, Interpol issued “Cryptojacking Mitigation and Prevention” guidance throughout the South-East Asian region.

The initiative resulted in the restoration of more than 20,000 affected routers.


Will Artificial Intelligence Replace Pathologists, Radiologists, Microbiologists?

Artificial intelligence is getting really, really good. In fact, it has become so technologically advanced that some high-skilled jobs that we once believed were “robot-proof” actually are not. The biomedical profession is ripe for overhaul.

Consider a new paper in The Lancet Digital Health. Researchers developed an algorithm with 98% sensitivity and 97% specificity for detecting prostate cancer. In other words, of all the patients who really did have prostate cancer, the algorithm correctly identified 98%; of all the patients who did not have prostate cancer, the algorithm was 97% correct. That’s phenomenal accuracy. According to a press release, the algorithm identified six cases of cancer that pathologists had missed.
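To make those two figures concrete, here is a minimal sketch of how sensitivity and specificity are computed from a confusion matrix. The counts below are hypothetical, chosen only to match the reported 98%/97% rates; they are not data from the Lancet study.

```python
def sensitivity(tp, fn):
    """Of all patients who truly have the disease, what fraction did the test flag?"""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Of all patients who are truly disease-free, what fraction did the test clear?"""
    return tn / (tn + fp)

# Hypothetical screen of 1,000 biopsies: 100 cancerous, 900 benign.
tp, fn = 98, 2      # cancers caught vs. cancers missed
tn, fp = 873, 27    # benign correctly cleared vs. falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 98%
print(f"specificity = {specificity(tn, fp):.0%}")  # 97%
```

Note the trade-off the two numbers capture: a test can trivially reach 100% sensitivity by flagging everyone, so only the combination of high sensitivity and high specificity is impressive.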

So, are pathologists going the way of elevator and telephone operators? Probably not, but the outlook isn’t fantastic, either. The authors of The Lancet study note that there has been a relative decline in the pathology workforce because the number of new cancer cases is outpacing the number of pathologists entering the field. So, demand for pathologists is increasing. However, AI will likely reduce the number of human pathologists actually needed. If, for instance, all the “easy” cases are diagnosed by computers, then pathologists (in combination with AI) would only be needed for the “harder” cases.

Radiologists are also in trouble. An article published in Nature in January of this year reports that an AI system outperforms radiologists in the detection of breast cancer. The algorithm was able to reduce the rate of both false positives and false negatives.

Not the Microbiologists, Too?

Your humble correspondent spent 10 years being trained in microbiology, first as an undergraduate then as a graduate student. At one point, I considered a career as a clinical microbiologist, in which I would be responsible for diagnosing infectious diseases.

In a traditional lab, an unknown bacterium is cultured and run through a series of metabolic tests to identify what it is. Identifying viruses is much harder. This is labor-intensive and time-consuming. So why do all this work when we now have machines that can isolate and sequence DNA, thereby identifying the microbe (including bacteria, viruses, and fungi) from its unique genetics? A company called Karius has developed a machine that can provide results within 24 hours of receiving a patient sample.

Is Any Job Robot-Proof?

While some jobs are safe for now, it appears that few if any jobs are truly robot-proof. There is even software that can write its own code, which means a robot could program itself or other robots. While I believe that fears of a Robot Apocalypse are far-fetched, self-programming computers indicate that even coders won’t be safe forever.
