An Artificial Intelligence Tool Made These Super-Realistic ‘Fake’ Photos

Artificial intelligence algorithms have come a long way, from creating black-and-white images in 2014 to lifelike fake photos in 2018.

Coronavirus outbreak triggers wave of apps, online tools for diagnosis, testing

As the number of infections caused by the spread of the Coronavirus continues to rise worldwide, medtech startups, healthcare organizations, and others are bringing applications and online services to market to help people track the virus, check for symptoms, get advice on preventing exposure, and even access testing methods that limit exposure risk.

Here’s a roundup of just a few of the latest resources out there, which will be updated:

Created by practicing physicians and developed by Fast Pathway, the DocClocker app gives patients real-time wait-time reporting from their medical providers, in theory helping to slow the spread of the virus by letting patients avoid long stays in medical waiting rooms and thereby limiting exposure risk.

Blue Spark Technologies launched TempTraq, a single-use, disposable temperature monitor in the form of a soft patch that continuously monitors and records axillary temperature and wirelessly transmits real-time data for up to 72 hours. Once the patch is placed on a patient, clinicians can remotely monitor temperatures with little to no direct contact, helping eliminate potential cross-contamination from shared temperature measurement devices.

EMIS is rapidly rolling out video consultation software across its EMIS Web clinical system, which is used by 4,000 practices in England, potentially enabling up to 35 million people in the UK to consult their GP by video via the free Patient Access app. The Video Consult service will be provided to practices at no cost for 12 weeks, along with online training and support materials.

Developed by Minneapolis-based Carrot Health, the COVID-19 Risk Index predicts populations and communities that are most susceptible to the negative impacts from a coronavirus outbreak. The aim is to help inform public health and intervention decisions at the national, regional and community levels by identifying who is most vulnerable.

Orion Health’s outbreak monitoring platform offers the ability to remotely monitor and engage patients in their homes, facilitating communication between quarantined people and the healthcare service, as well as maintaining visibility of those recently discharged. The platform will use artificial intelligence over time to allow providers to identify patients at risk of deterioration and optimize their care.

TytoHome. Developed by Tyto Care, this remote examination device enables patients quarantined in hospitals or isolated at home to perform clinic-quality self-examinations, and then connects them with physicians who can assess symptoms from a safe distance.

Nathan Eddy is a healthcare and technology freelancer based in Berlin.
Email the writer: nathaneddy@gmail.com
Twitter: @dropdeaded209

Healthcare IT News is a HIMSS Media publication.

CfP: Int. Conference on Artificial Intelligence and its Legal Implications by ILNU, Ahmedabad [March 15-16]: Submit by Jan 15 – Lawctopus

ILNU Ahmedabad is organizing an International Conference on Artificial Intelligence and Its Legal Implications on 15th and 16th March 2019.

goodyear unveils self-regenerating tyre concept with a rechargeable tread

goodyear has unveiled their latest concept car tyre featuring tread capable of regenerating itself. the recharge tyre has a biodegradable tread that can regenerate thanks to a special liquid compound made from a biological material and reinforced with fibres inspired by spider silk.

images courtesy of goodyear

when the tyre needs recharging, drivers can insert a new capsule containing this liquid compound into the centre of the wheel. depending on their location, vehicle, the season and the length of their journey, the tyre will then regenerate itself accordingly. thanks to artificial intelligence, a driver profile would be created, around which the liquid compound would be customized, generating a compound blend tailored to each individual.

in addition to radically simplifying the process of replacing your tyres with rechargeable capsules, the tread would be supported by a lightweight, non-pneumatic frame with a tall-and-narrow shape. this is a thin, robust, low-maintenance construction that would eliminate the need for pressure maintenance or downtime related to punctures.

‘goodyear wants the tire to be an even more powerful contributor to answering consumers’ specific mobility needs,’ said mike rytokoski, vice-president and chief marketing officer, goodyear europe. ‘it was with that ambition that we set out to create a concept tire primed for the future of personalized and convenient electric mobility.’

project info

company: goodyear
name: recharge
status: concept

Fears of OpenAI’s super-trolling artificial intelligence are overblown | New Scientist

Elon Musk-backed firm OpenAI has built a text-generating AI that it says is too dangerous to release because of potential misuse

Millions of dollars in funding announced for breast cancer research | CTV News Montreal

Three major projects fighting breast cancer are getting an extra $10 million in funding from the Canadian Cancer Society and other partners.

The projects are a 3D printer that can reproduce a tumour from the cells of breast cancer patients, an initiative to use artificial intelligence to predict chemotherapy needs, and a project on DNA repair aimed at developing new drugs.

Researchers and patients are excited about the funding injection.

“The 3D printer remakes this tumour microenvironment in the same manner as it exists in the patient,” said Morag Park, director of the Goodman Cancer Research Centre. “It’s this reconstituted tumour that allows us to test new drugs and therapies.”

Mei-Lin Yee knows firsthand how critical the new developments are.

Ten years ago, she was diagnosed with triple-negative breast cancer.

“Over a period of five years, I had 174 chemo treatments and over five different lines of chemotherapy as the doctors tried to figure out which chemo would be able to work for me,” she said.

Researchers and patients alike are grateful and optimistic.

“Without that money, we go nowhere,” said Alain Nepveu of the Goodman Cancer Research Centre. “It does cost money to buy these libraries, to rent the robot that does the work, to have the experts that run the robots, to have the biochemist in the lab to analyze the results.”

“This will be uplifting, and this will give hope,” Yee said. “Really, that’s what it’s all about. It’s not only finding effective treatments, but also giving hope to people as they deal with the disease.” 

Adobe enlists AI to establish self-healing ITSM | CIO

Adobe has embraced AI, ML, NLP and other emerging technologies to improve the company’s service management — and pave the way to a self-healing ITSM framework.

Artificial intelligence (AI), machine learning (ML) and natural language processing (NLP) are some of the hottest new technologies in IT service management. These technologies help companies streamline service management by automating business processes and tasks within the ITSM framework.

Adobe provides a shining example. The creativity software maker has used AI, ML and NLP to help “change the dynamic within ITSM to allow for a better level of service to whoever that end customer is and to change the role of the ITSM professional to work on higher-level tasks instead of just ticket reduction,” says Cynthia Stoddard, senior vice president and CIO at Adobe.

The company’s intelligence-enabled ITSM makeover has helped Adobe not only support customer-facing digital media services but also improve productivity and efficiency inside the organization. Thanks to AI, ML and NLP, Adobe has improved ITSM processes, reduced errors and streamlined service management while also eliminating mundane and repetitive tasks for IT workers.

Here’s an inside look at Adobe’s shift to intelligent ITSM.

Artificial intelligence development services – APRO Software

At Apro, we have a team of expert AI developers ready to handle your projects.
Using our OpenX method, they’re able to simplify the AI development process.

Companies trust our AI development services because we do things differently, and have created a superior method that improves communication, tracking, and delivery.

All of our AI development services are delivered by top developers and managers with decades of shared experience delivering projects successfully.

We’re also a small team, which enables us to go above and beyond in the quality of service and support we provide.

Can The “MARK” Be Reversed? What YOU Need To Know…Dr. Michael Lake and David Heavener – Kingdom Intelligence Briefing

David Heavener and Dr. Michael Lake discuss blockchain, artificial intelligence, the hive mind, and the mark of the beast system. It is closer than you think!

Artificial Intelligence and Bad Data

Facebook, Google, and Twitter lawyers gave testimony to Congress on how they missed the Russian influence campaign. Even though the ads were bought in Russian currency on platforms chock-full of analytics engines, the problematic nature of the influence campaign went undetected. “Rubles + US politics” did not trigger an alert, because the nature of off-the-shelf deep learning is that it only looks for what it knows to look for, and on a deeper level, it is learning from really messy (unstructured) or corrupted and biased data. Understanding the unstructured nature of public data (mixed with private data) is improving by leaps and bounds every day. That’s one of the main things I work on. Let’s focus instead on the data quality problem.
Here are a few of the many common data quality problems:

  • Data sparsity: We know a bit of the picture about a lot of things, but have no clear picture of most things.
  • Data corruption: Convert a PDF to text and print it. Yeah. Lots of garbage comes out besides the text.
  • Lots of irrelevant data: In a chess game, we can prune whole sections of the tree search, and more generally, in a picture of a cat, most of the pixels don’t tell us how cute the cat is. In totally random data, we humans (and AI) can see patterns where there really are none.
  • Learning from bad labeling: Bias of the labeling system, possibly due to human bias.
  • Missing unexpected patterns: Black swans, regime change, class imbalance, etc.
  • Learning wrong patterns: Correlation that is not really causation can be trained into an AI, which then wrongly assumes the correlation is causal.
  • I could go on. (One of these, learning from bad labeling, is easy to demonstrate; see the short sketch right after this list.)
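
To make the bad-labeling point concrete, here is a minimal sketch using plain scikit-learn on synthetic data (nothing to do with any dataset mentioned in this post): train the same model on clean labels and on labels where a fraction has been flipped at random, then compare accuracy on a clean test set.

```python
# Sketch: how label noise (bad labeling) degrades a model.
# Synthetic data only; an illustrative assumption, not production code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(labels, fraction, rng):
    """Return a copy of `labels` with `fraction` of them flipped at random."""
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    noisy[idx] = 1 - noisy[idx]
    return noisy

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.3):
    y_noisy = flip_labels(y_train, noise, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"label noise {noise:.0%}: test accuracy {acc:.3f}")
```

On most runs the test accuracy slides as the noise fraction grows, and nothing in the training loop tells you the labels were the problem.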

We know that labelled data is really hard to come by for basically any problem, and even labelled data can be full of bias. I visited a prospective client on Friday that had a great data team but no ability to collect the data they needed from the real world because of ownership and IP issues. This “Rubles + US politics” example of good data that is missed by AI is not surprising to experts. Why? Well, AI needs to know what to look for, and the social media giants were looking for more aggressive types of attacks, like monitoring soldiers’ movements based on their Facebook profiles.

Indeed, the reason we miss signals from good data in general is the huge amount of BAD data in real systems like Twitter. This is a signal-to-noise problem. If there are too many alerts, the alert system is ignored. Too few, and the system misses critical alerts. It is not only adversaries like the Russians trying to gain influence. The good guys, companies and brands, do the same thing. Drip campaigns and guerrilla marketing are just as much a tactic for spreading influence in shoe sales as in political meddling in an election. So, the real reason we miss signals from good data is bad data. Using simple predicate logic, we know that false assumptions can imply anything. So learning from data we know is error-riddled carries some real baggage.
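
As a toy illustration of the “it only looks for what it knows to look for” problem, here is a purely hypothetical rule-based alerting layer. The ad records, field names, and rules below are invented for the example; the point is simply that a rule nobody thought to write never fires, however obvious it looks in hindsight.

```python
# Hypothetical sketch: alerts only fire for patterns someone thought to encode.
# All records, fields, and rule names below are invented for illustration.
ads = [
    {"advertiser": "acme_pac", "currency": "USD", "topic": "us_politics"},
    {"advertiser": "troll_farm", "currency": "RUB", "topic": "us_politics"},
    {"advertiser": "shoe_brand", "currency": "RUB", "topic": "sneakers"},
]

# The rules actually deployed: nobody thought of "rubles + US politics".
deployed_rules = {
    "known_bad_advertiser": lambda ad: ad["advertiser"] in {"blocked_llc"},
}

# The rule that would have caught the campaign, written only in hindsight.
hindsight_rules = {
    "foreign_currency_political_ad": lambda ad: (
        ad["currency"] == "RUB" and ad["topic"] == "us_politics"
    ),
}

def run_rules(rules, records):
    """Return (rule_name, record) pairs for every record that trips a rule."""
    return [(name, ad) for ad in records for name, rule in rules.items() if rule(ad)]

print("alerts from deployed rules:", run_rules(deployed_rules, ads))   # -> []
print("alerts with hindsight rule:", run_rules(hindsight_rules, ads))  # -> 1 alert
```

Tighten the rules and you drown in alerts; loosen them and you miss the one that matters. That is the signal-to-noise trade-off described above.
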
One example of bad data is finding that your AI model was trained on the wrong type of data. Text from a chat conversation is not like text from a newspaper. Both are composed of text, but their content is very different. AI trained on the Wikipedia dataset or Google News articles will not correctly understand (i.e. “model”) the free-form text we humans use to communicate in chat applications. There are slightly better datasets for that, and maybe the comments from the Hacker News dataset too. Often we need to use the right pre-trained model or off-the-shelf dataset for the right problem, and then do some transfer learning to improve from the baseline. However, this assumes we can use the data at all. Many public datasets have even bigger bad data problems that cause the model to simply fail. Sometimes a field is used and sometimes it is left blank (sparsity); sometimes non-numeric data creeps into numerical columns (“one” vs 1). I found an outlier in a large private real estate dataset where one entry among a million was a huge number entered by a human as a fat-finger error.
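
A quick audit for the three problems just mentioned, sketched with pandas on invented column names and values (the real estate dataset above is private, so this is emphatically not it), might look like this:

```python
# Sketch of a quick data-quality audit: sparsity (blank fields), mixed types
# ("one" vs 1) in numeric columns, and fat-finger outliers.
# Column names and values are made up for illustration.
import pandas as pd

df = pd.DataFrame({
    "price":    [250_000, 300_000, "one", None, 27_500_000],  # "one" plus a fat-finger entry
    "bedrooms": [3, None, 2, 4, 3],
})

# 1. Sparsity: fraction of missing values per column.
print(df.isna().mean())

# 2. Mixed types: values that fail numeric conversion.
numeric = pd.to_numeric(df["price"], errors="coerce")
print("non-numeric entries:\n", df.loc[numeric.isna() & df["price"].notna(), "price"])

# 3. Fat-finger outliers: values far from the median (robust z-score via MAD).
clean = numeric.dropna()
mad = (clean - clean.median()).abs().median()
robust_z = (clean - clean.median()) / (1.4826 * mad)
print("suspected outliers:\n", clean[robust_z.abs() > 5])
```

None of this fixes the data by itself, but it tells you which columns you cannot trust before a model quietly learns from them.
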
Problems like the game of Go (AlphaGo Zero) have no bad data to analyze. Instead the AI evaluates more relevant and less relevant data. Games are a nice constrained problem set, but in most real-world data there is bias. Lots of it. Boosting and other techniques can be helpful too. The truth is that some aspects of machine learning are still open problems, and shocking improvements happen all the time. Example: capsule networks beating CNNs.
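
As a hedged example of what “boosting and other techniques” can look like in practice, the sketch below uses scikit-learn on a synthetic, heavily imbalanced dataset with a little label noise mixed in, and scores with balanced accuracy so the minority class actually counts. Whether the ensemble wins depends entirely on the data, which is rather the point.

```python
# Sketch: boosting on an imbalanced, noisy dataset (synthetic, illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# ~95% of samples in one class, plus flipped labels to simulate messy data.
X, y = make_classification(n_samples=10_000, weights=[0.95], flip_y=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [
    ("single decision tree", DecisionTreeClassifier(random_state=0)),
    ("gradient boosting", GradientBoostingClassifier(random_state=0)),
]:
    model.fit(X_train, y_train)
    score = balanced_accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: balanced accuracy {score:.3f}")
```
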
It is important to know when error is caused by bad things in the data rather than by improperly fitting to the data. And live systems that learn while they operate, like humans do, are particularly susceptible to learning wrong information from bad data. This is kind of like Simpson’s paradox, in that the data is usually right, and so fitting the data is a good thing, but sometimes fitting to the data produces paradoxes because the method itself (fitting to the data) is based on a bad assumption that all data is ground-truth data. There are some fun videos on Simpson’s paradox, and Autodesk’s datasaurus, which I just love, is totally worth reading in full.
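
Here is the paradox with made-up numbers: within each group the relationship between x and y is negative, but pool the groups and the correlation flips positive, purely because of where the groups sit relative to each other. Fitting the pooled data is still “fitting the data”, and it is still wrong.

```python
# Sketch: Simpson's paradox with invented numbers.
# Each group trends downward, yet the pooled data trends upward.
import pandas as pd

df = pd.DataFrame({
    "group": ["A"] * 3 + ["B"] * 3,
    "x":     [1, 2, 3, 6, 7, 8],
    "y":     [3, 2, 1, 8, 7, 6],
})

print("pooled correlation:", round(df["x"].corr(df["y"]), 2))  # positive (~0.81)
for name, g in df.groupby("group"):
    print(f"group {name} correlation:", round(g["x"].corr(g["y"]), 2))  # -1.0 each
```
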
We talked about the fact that most real-world data is full of corruption and bias. That kind of sucks, but not all is lost. There are a variety of techniques for combating bad data quality, not the least of which are collecting more data and cleaning up the data. More advanced techniques like ensembles with NLP, knowledge graphs and commercial-grade analytics are not easy to get your hands on. More on this in future articles.
If you enjoyed this article on bad data and artificial intelligence, then please try out the clap tool. Tap that. Follow us on Medium. Share on Facebook and Twitter. Go for it. I’m also happy to hear your feedback in the comments. What do you think?
Happy Coding!
-Daniel daniel@lemay.ai ← Say hi. Lemay.ai 1(855)LEMAY-AI
Other articles you may enjoy:

  • How to Price an AI Project
  • How to Hire an AI Consultant
  • Artificial Intelligence: Get your users to label your data
