Coronavirus update: Vatican virus patient attended international conference | 7NEWS.com.au

The Vatican says a patient in its health services has tested positive for coronavirus, the first in the tiny city-state surrounded by Rome.

A Vatican source said the patient had participated in an international conference hosted by the Pontifical Academy of Life last week in a packed theatre several blocks from the Vatican.

In the video above: How to protect yourself from the coronavirus

Participants at the three-day conference on Artificial Intelligence included top executives of US tech giants Microsoft and IBM.

The academy issued a separate statement saying it was informing all other participants of the development by email, but did not say whether the patient was the same person whose case was announced earlier by Vatican spokesman Matteo Bruni.

The discovery raised the likelihood that the virus had already spread further into the Italian capital, since most Vatican employees live in Rome and those who live in the Vatican frequently enter and leave the city-state.

Bruni said the case was diagnosed on Thursday and that services in the Vatican’s clinics had been suspended so the areas could be sanitised.

Most Vatican employees who use its health services live in Italy on the other side of the border with the 108-acre city-state.

Bruni gave no details on whether the person who tested positive was such an employee or among the relatively few clergy or guards who live inside its walls.

Media get peek at Tokyo’s new high-tech Takanawa Gateway Station | The Japan Times

The Yamanote Line’s first new station since 1971 was unveiled to the media on Monday, with East Japan Railway Co. showcasing robots and other “futuristic” features to help people find their way around.

Takanawa Gateway Station, situated in Minato Ward’s Konan district between Shinagawa and Tamachi stations, will officially open to the public on Saturday. It is the 30th station on Tokyo’s heavily used loop line and the first since Nishi-Nippori Station back in 1971.

The station will also serve the JR Keihin Tohoku Line running from Saitama Prefecture to Kawasaki and Yokohama.

“We aim to function as a gateway connecting Tokyo and the world at an area that has good traffic accessibility,” said Mie Miwa, a JR East official involved in the project. “I hope the station will be loved by people for a long time.”

The station will use robots programmed with artificial intelligence, including some tasked with guiding people as they change trains or look for nearby attractions and facilities.

It will also have an unmanned shop where people can buy goods that are scanned and checked out by camera-equipped devices that can recognize the items being sold.

JR East expects some 23,000 people to use the station daily at first, with the figure growing to 123,000 by 2024, when the station is scheduled to start operating as a transportation and business hub together with new high-rise office buildings around it.

Microsoft and Sony partner for game streaming and other technologies | Windows Central

Microsoft and Sony will also collaborate on semiconductors and artificial intelligence (AI). When it comes to semiconductors, both parties will focus on image sensors. The statement went on to say that by “integrating Sony’s cutting-edge image sensors with Microsoft’s Azure… technology… the companies aim to provide enhanced capabilities for enterprise customers.” When it comes to AI, the two want to focus on user-friendly experiences which can help customers in their day-to-day lives.

Still, the focus here is clearly on gaming. It’ll be interesting to see what the collaboration produces. This is an exciting time to be a gamer! Who would’ve thought Microsoft and Sony would come together in such a meaningful way?

What are your thoughts on this announcement? What do you hope to see from the partnership? Let us know.

Blockchain Project for National Archives Reports Successful Trial for Audio-Visual Content

A blockchain project developed to safeguard the integrity and accessibility of digital government records of national archives worldwide will soon present the results of a successful trial deployment in the United Kingdom, Estonia and Norway. The news was revealed in an official press release published on May 29.

The project, named ARCHANGEL, involves the U.K. National Archives, the University of Surrey and the U.K. Open Data Institute, with funding from the Engineering and Physical Sciences Research Council (EPSRC). Its trial deployment in the U.K., Estonia and Norway focused on leveraging blockchain and other technologies to tackle the long-term future of digital video archives.

In an academic paper to be presented at the CVPR 2019 conference in Long Beach, California, in mid-June, researchers from Surrey University’s Centre for Vision, Speech and Signal Processing (CVSSP) outlined their success in developing a tamper-sensitive, future-accessible architecture for archiving audio-visual content. The system is secured using a proof-of-authority blockchain distributed across multiple independent archives.

In a statement for the press release, University of Surrey professor and ARCHANGEL principal investigator John Collomosse said that, given the vast volume of digital content accumulating in archives worldwide, it is becoming increasingly critical that institutions be able to vouch for the provenance and integrity of archival materials to the public in a transparent manner. He added:

“By combining blockchain and artificial intelligence technologies, we have shown that it is possible to safeguard the integrity of archival data in the digital age. It essentially provides a digital fingerprint for archives, making it possible to verify their authenticity.”
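
At its simplest, that “digital fingerprint” idea can be illustrated in a few lines of code: compute a cryptographic hash of each archive file and record it on a shared ledger, so that any later change to the file no longer matches its registered fingerprint. The Python sketch below is a toy illustration under those assumptions; the in-memory `ledger` list and the `register` and `verify` helpers are invented for this example, whereas ARCHANGEL itself uses a proof-of-authority blockchain replicated across multiple independent archives.

```python
# Toy sketch of fingerprint-then-verify archiving. Illustrative only; this is
# not the ARCHANGEL codebase.
import hashlib
import json
import time

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file's bytes, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a ledger whose entries would be replicated across archives.
ledger = []

def register(path: str, archive: str) -> dict:
    """Record a file's fingerprint, provenance and timestamp on the ledger."""
    entry = {
        "archive": archive,
        "file": path,
        "sha256": fingerprint(path),
        "timestamp": time.time(),
    }
    ledger.append(json.dumps(entry, sort_keys=True))
    return entry

def verify(path: str) -> bool:
    """Return True if the file's current hash matches a registered entry."""
    digest = fingerprint(path)
    return any(json.loads(e)["sha256"] == digest for e in ledger)
```

A single flipped bit in a file changes the digest completely, which is what makes the scheme tamper-evident.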

The press release notes that ARCHANGEL forms part of the Surrey blockchain testbed, which reportedly includes over £3.5 million ($4.4 million) of projects funded by UKRI (UK Research and Innovation) and the EU.

Professor Adrian Hilton, director of CVSSP, noted in his statement that the ambitious project represents a “great opportunity for the UK to lead internationally in application of distributed ledger technology to secure personal and national data archives.”

The press release adds that the ARCHANGEL project has also been trialled at the national government archives of the United States and Australia.

As Cointelegraph reported last year, ARCHANGEL participants have previously stated their aim as being the “promise that no individual institution could attempt to rewrite history.”

Goldspot partners with Pacton on AI exploration at Red Lake | MINING.com

Goldspot Discoveries (TSXV: SPOT) announced on Friday that it has signed a service agreement with Pacton Gold (CVE: PAC) to use Goldspot’s A.I. and machine learning tools to evaluate and identify possible mineral and drill targets on Pacton’s Red Lake, Ontario property.

Goldspot has been granted a 0.5% net smelter return royalty on the property, along with options to purchase, for C$1 million each, an additional 0.5% net smelter return royalty on all metals produced from the Red Lake property and a 0.5% net smelter return royalty on all metals produced from the current claims comprising Pacton’s Australian assets in the Pilbara Craton.

“The Pacton Gold property in the historic Red Lake gold camp in northwestern Ontario excites us. It is the ideal district to use artificial intelligence and machine learning to find new discoveries,” said Denis Laviolette, GoldSpot’s president and CEO, in a media statement. “After initial screening and utilizing artificial intelligence to analyze various layers of data related to Pacton Gold’s property, we have made our largest speculative bet to date.”

“We believe Red Lake’s ground is ripe for a technological revolution, and this deal gives us royalty exposure to 16,630 hectares of prospective land,” said Laviolette. 

Market reaction to the partnership was positive: Goldspot’s stock was up 4% and Pacton’s up 8% on the TSX Venture Exchange on Friday afternoon.

Google I/O 2019: Watch Live Video of the Keynote Right Here | WIRED

Do you hear that? It’s the sound of Google executives practicing their lines ahead of Google I/O. The company’s annual developer conference in Mountain View, California, kicks off this Tuesday. The three-day event gives Google a chance to show off its latest work and set the tone for the year to come.

Can’t make it to the Shoreline Amphitheater? You can watch the entire keynote on the event page or on the Google Developers YouTube channel. It begins at 10 am PT (1 pm ET) on May 7 and should last for about 90 minutes. We’ll liveblog the whole thing here on wired.com.

Google I/O is technically a developer conference, and there should be plenty of talk about all the fun things developers can build using Google’s latest tools. But it’s also an opportunity to get consumers excited about what’s cooking in Mountain View. Last year, the company used the conference to debut its “digital wellness” initiative and a suite of new visual search tools for Google Lens. It also introduced Duplex, the eerily realistic AI assistant that can make dinner reservations and schedule haircuts like a human would.

This year, expect a parade of Google executives to talk about privacy, artificial intelligence, augmented reality, and more. We’ll likely see the latest version of Android software, and if we’re lucky, maybe even some new hardware.

Adobe Unveils AI Tool That Can Detect Photoshopped Faces | Technology News

Adobe, together with researchers from the University of California, Berkeley, has trained artificial intelligence (AI) to detect facial manipulation in images edited using its Photoshop software.

At a time when deepfake visual content is becoming more common and more deceptive, the effort is also intended to make image forensics understandable to everyone.

“This new research is part of a broader effort across Adobe to better detect image, video, audio and document manipulations,” the company wrote in a blog post on Friday.

As part of the programme, the team trained a convolutional neural network (CNN) to spot changes made with Photoshop’s “Face Aware Liquify” feature, which was designed specifically to alter facial features like the eyes and mouth.

In testing, human eyes were able to identify the altered face 53 percent of the time, while the trained neural network achieved accuracy as high as 99 percent.

The tool also identified specific areas and methods of facial warping.
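
As a rough illustration of the approach, the sketch below shows what a minimal binary CNN classifier for “original versus warped” face crops could look like in PyTorch. The architecture, the 224×224 input size, and the training step are assumptions made for this example; they are not Adobe’s actual model or training pipeline.

```python
# Minimal sketch of a binary "manipulated or not" CNN, assuming 224x224 RGB
# face crops. Illustrative only; not Adobe's model.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 2),  # two logits: [original, warped]
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (N, 3, 224, 224) face crops; labels: 0 = original, 1 = warped."""
    logits = detector(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The reported 99 percent figure suggests such a network picks up on low-level warping artifacts that human eyes simply miss.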

Adobe’s push into detecting facial manipulation came just days after doctored videos of Facebook CEO Mark Zuckerberg and US House Speaker Nancy Pelosi made the rounds on social media as well as news channels.

“This is an important step in being able to detect certain types of image editing, and the undo capability works surprisingly well. Beyond technologies like this, the best defence will be a sophisticated public who know that content can be manipulated, often to delight them, but sometimes to mislead them as well,” said Gavin Miller, Head of Research, Adobe.

Adobe’s Photoshop software was originally released in 1990.

Manheim Township senior Zach Johnson battling cancer amid final season on volleyball court | Local News | lancasteronline.com

As a senior and returning starter on an otherwise inexperienced Manheim Township boys volleyball team, Zach Johnson has been a bright spot.

Sci-Fi Short Film Watch Room Looks at Human, AI Relationships

Three genius friends build an artificial intelligence that seems to have a mind of its own. As we see in the new sci-fi short film Watch Room, the program they created is much more in control than they realize.

Dust has released a new sci-fi short, Watch Room, written by Michael Koehler and directed by Noah Wagner. Three friends—Nate, Chloe, and Bernard—are working out of their garage to perfect an artificial intelligence through virtual reality. The AI, which they’ve named Kate, has been tasked with talking a suicidal man off a ledge, but she keeps failing. It might not be an accident.

A.I. is Getting Freakishly Good at Generating Fake Humans | Digital Trends

A.I. is getting scarily good at lying to us. No, we’re not talking about wilfully misleading people for nefarious ends, but rather about creating sounds and images that appear real yet don’t exist in the real world.

In the past, we’ve covered artificial intelligence that’s able to create terrifyingly real-looking “deep fakes” in the form of faces, synthetic voices and even, err, Airbnb listings. Now, researchers from Japan are going one step further by creating photorealistic, high-res videos of people — complete with clothing — who have only ever existed in the fevered imagination of a neural network. The company responsible for this jaw-dropping tech demo is DataGrid, a startup based on the campus of Japan’s Kyoto University. As the video up top shows, the A.I. algorithm can dream up an endless parade of realistic-looking humans who constantly shapeshift from one form to another, courtesy of some dazzling morphing effects.

Like many generative artificial intelligence tools (including the A.I. artwork which sold for big bucks at a Christie’s auction last year), this latest demonstration was created using something called a Generative Adversarial Network (GAN). A GAN pits two artificial neural networks against one another. In this case, one network generates new images, while the other attempts to work out which images are computer-generated and which are not. Over time, the adversarial process allows the “generator” network to become good enough at creating images that it can successfully fool the “discriminator” every time.
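
In code, that adversarial tug-of-war is just two networks and two optimizers taking turns. The PyTorch sketch below shows one training step of a deliberately tiny GAN; the fully connected layers, latent size, and 28×28 image size are placeholder assumptions and bear no resemblance to the high-resolution model DataGrid demonstrated.

```python
# Tiny GAN training step: the discriminator learns to tell real from fake,
# while the generator learns to fool it. Sizes are toy placeholders.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),  # fake image pixels in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # single real-vs-fake logit
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> tuple:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator step: real images are labelled 1, generated images 0.
    # detach() stops this step from updating the generator.
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1))
              + loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call the fakes "real".
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

In the ideal case, training ends when the discriminator can do no better than a coin flip on the generator’s output.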

As can be seen from the video, the results are impressive. They don’t appear to have any of the image artifacts or strange glitches that have marred many previous attempts at generating images of people. However, it’s also likely no coincidence that the video shows humans posed against plain white backdrops, which minimizes the risk of busy backgrounds degrading the generated images.

Provided all is as it seems, this is a fascinating (albeit more than a little disconcerting) advance. If we were employed as movie extras or catalog models for clothing brands, we’d probably be feeling a little nervous right now. At the very least, the prospect of next-level fake news just got a whole lot more real.
