HPE to acquire supercomputer maker Cray for $1.3 billion | VentureBeat

(Reuters) – Supercomputer manufacturer Cray said on Friday it would be bought by Hewlett Packard Enterprise in a deal valued at about $1.3 billion (1 billion pounds), net of cash.

The offer of $35 per share represents a premium of 17.4% to Cray’s last close.

HPE said it expects the deal to increase its footprint in federal business and academia, and to allow it to sell supercomputing products to its commercial clients.

The deal, expected to close by the first quarter of HPE’s fiscal year 2020, will add to its adjusted operating profit in the first full year after closing.

As part of the deal, HPE expects to incur one-time integration costs that will be absorbed within its fiscal year 2020 free cash flow outlook of $1.9 billion to $2.1 billion, which remains unchanged.

Seattle-headquartered Cray has U.S.-based manufacturing operations and about 1,300 employees worldwide. It earned $456 million in revenue in its last fiscal year.

Cray’s supercomputing systems can handle massive data sets, converged modelling, simulation, artificial intelligence, and analytics workloads.

(Reporting by Arjun Panchadar in Bengaluru; Editing by Shounak Dasgupta)


Simple Intro to Conditional GANs with TorchFusion and PyTorch

Humans are very good at recognizing things and also at creating new ones. For a long time we have worked on teaching computers to emulate the human ability to recognize things, but the ability to create new things eluded artificial intelligence systems until 2014, when Ian Goodfellow invented Generative Adversarial Networks. In this post, we shall go through a basic overview of Generative Adversarial Networks and use them to generate images of specific digits.

Overview of Generative Adversarial Networks

Imagine you are an artist trying to draw a very realistic picture of Obama, one that will fool a judge into thinking it is a real photograph. The first time you try, the judge easily detects that your picture is fake, so you try again and again until the judge is fooled into thinking the picture is real. Generative Adversarial Networks work this way. They consist of two models:
a Generator that draws images, and a Discriminator that attempts to distinguish between real images and the images drawn by the generator.
In a sense, the two are competing with each other: the generator is trained to fool the discriminator, while the discriminator is trained to properly tell apart which images are real and which are generated. In the end, the generator becomes so good that the discriminator can no longer tell real images from generated ones.
Below are samples created by a GAN Generator.
GANs fall into two general classes: Unconditional GANs, which randomly generate images of any class, and Conditional GANs, which generate images of a specific class. In this tutorial, we shall be using conditional GANs, as they allow us to specify what we want to generate.
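To make the adversarial setup concrete, here is a minimal sketch of a single conditional GAN training step in plain PyTorch. It is purely illustrative: the generator G(noise, labels) and discriminator D(images, labels) call signatures are assumptions, and TorchFusion (introduced below) wraps this whole loop inside a Learner so you never have to write it yourself.

import torch
import torch.nn as nn

latent_size = 128

def train_step(G, D, real_images, labels, g_optim, d_optim):
    # One adversarial update: the discriminator learns to separate real
    # from generated images, then the generator learns to fool it.
    criterion = nn.BCEWithLogitsLoss()
    batch = real_images.size(0)
    ones = torch.ones(batch, 1)
    zeros = torch.zeros(batch, 1)

    # Discriminator step: push real images towards 1, generated ones towards 0.
    noise = torch.randn(batch, latent_size)
    fake_images = G(noise, labels).detach()  # no gradients into G on this step
    d_loss = criterion(D(real_images, labels), ones) + criterion(D(fake_images, labels), zeros)
    d_optim.zero_grad()
    d_loss.backward()
    d_optim.step()

    # Generator step: try to make the discriminator output 1 for generated images.
    noise = torch.randn(batch, latent_size)
    g_loss = criterion(D(G(noise, labels), labels), ones)
    g_optim.zero_grad()
    g_loss.backward()
    g_optim.step()

    return d_loss.item(), g_loss.item()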

Tools Setup

Training GANs is usually complicated, but thanks to TorchFusion, a research framework built on PyTorch, the process will be super simple and very straightforward.

Install TorchFusion via PyPI


pip3 install torchfusion

Install PyTorch

If you don’t have PyTorch already installed, head over to pytorch.org for the latest install binaries of PyTorch.
Now you are fully set up!
Next, import a couple of classes.
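The original post imports these classes in a code block at this point. A sketch of what that block might look like is below; the TorchFusion module paths and class names are assumptions based on its GAN examples, so verify them against the documentation linked at the end of this post.

import torch
from torch.optim import Adam

# TorchFusion names below are assumptions -- check the official docs.
from torchfusion.gan.learners import StandardGanLearner
from torchfusion.gan.applications import StandardGenerator, StandardProjectionDiscriminator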
Define the generator network and the discriminator
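A sketch of how the two networks might be defined using TorchFusion's built-in conditional GAN architectures; the class names and argument names here are assumptions drawn from its GAN examples rather than the post's exact code.

# Assumed TorchFusion classes and arguments -- verify against the documentation.
G = StandardGenerator(output_size=(1, 32, 32), latent_size=128, num_classes=10)
D = StandardProjectionDiscriminator(input_size=(1, 32, 32), num_classes=10)

# Move both models to the GPU if one is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
G = G.to(device)
D = D.to(device)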
In the above, we specify the resolution of the images to be generated as 1 x 32 x 32.
Set up the optimizers for both the Generator and Discriminator models
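Plain PyTorch Adam optimizers work here; the learning rate and beta values below follow common GAN practice rather than the post's exact settings.

# Typical GAN optimizer settings: a small learning rate and beta1 of 0.5.
g_optim = Adam(G.parameters(), lr=0.0002, betas=(0.5, 0.999))
d_optim = Adam(D.parameters(), lr=0.0002, betas=(0.5, 0.999))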
Now we need to load the dataset whose distribution we shall try to draw samples from. In this case, we shall be using MNIST.
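TorchFusion ships its own dataset helpers, but an equivalent loader built with torchvision looks like the sketch below: MNIST digits are resized to 32 x 32 to match the generator's output size, and the batch size of 64 is an assumption.

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Resize digits to 32 x 32 and scale pixel values to [-1, 1].
transform = transforms.Compose([
    transforms.Resize(32),
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,)),
])

train_set = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
dataloader = DataLoader(train_set, batch_size=64, shuffle=True)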
Below we create a Learner. TorchFusion has various learners that are highly specialized for different purposes.
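A sketch of creating the learner; the class name StandardGanLearner is an assumption based on TorchFusion's GAN examples, and other learners implement variants such as hinge or Wasserstein losses.

# Assumed TorchFusion learner class -- check the docs for the exact name.
learner = StandardGanLearner(G, D)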
And now, we can call the train function to train the two models.
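The call might look like the sketch below; every keyword argument name here, including save_outputs_interval, is an assumption to be checked against the TorchFusion documentation.

# Argument names are assumptions -- verify against the TorchFusion docs.
learner.train(
    dataloader,
    gen_optimizer=g_optim,
    disc_optimizer=d_optim,
    latent_size=128,
    num_epochs=20,
    save_outputs_interval=500,  # display sample outputs every 500 batch iterations
    model_dir="./mnist-gan",    # where checkpoints and sample images are saved
)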
By setting the save-outputs interval to 500, the learner will display sample generated outputs after every 500 batch iterations.
Here is the full code
After just 20 epochs of training, this generates the image below:
Now to the most exciting part: using your trained model, you can easily generate new images of specific digits.
In the code below, we generate a new image of the digit 6; you can specify any digit between 0 and 9.
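Generation needs only the trained generator: sample a latent vector, pass in the label of the digit you want, and save the decoded image. The sketch below uses plain PyTorch and torchvision on the assumed G(noise, labels) interface.

import torch
from torchvision.utils import save_image

G.eval()
digit = 6                                 # any digit between 0 and 9
noise = torch.randn(1, 128).to(device)    # one latent vector of size 128
label = torch.tensor([digit]).to(device)  # the class we want to generate

with torch.no_grad():
    image = G(noise, label)               # assumed (noise, labels) signature

# Rescale from [-1, 1] back to [0, 1] before saving to disk.
save_image(image * 0.5 + 0.5, "digit_6.png")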
Result:
Generative Adversarial Networks are an exciting field of research, and TorchFusion makes working with them very simple through well-optimized implementations of the best GAN algorithms.
TorchFusion is developed and maintained by Moses Olafenwa and me, the AI Commons Team, as part of our effort to democratize Artificial Intelligence and make it accessible to every single person and organization on the planet.
The official repo of TorchFusion is https://github.com/johnolafenwa/TorchFusion
Tutorials and documentation for TorchFusion are available at https://torchfusion.readthedocs.io
You can always reach me on Twitter via @johnolafenwa


This AI for Scene Scraping, Creating Deep Fakes and More

Ever since the birth of data and computation, technology has been like a knife: it can be useful or destructive.

Even though artificial intelligence is just mathematics played out in high-dimensional space, the results sometimes look like they go beyond the practical problems.

Rendering an image on the scene

But here is a deferred neural network. Neural networks obviously draw on theories of how the human brain works, yet the way humans learn over time looks different and can never be fully reduced to machines.

This network works with a dataset of about 1,000 images and learns to mimic the scene at the core level. Creating a fake normally works layer by layer, and some deep learning methods are limited to a certain number of iterations, but this one works like a charm.

With it, a novel viewpoint can be rendered based on the characteristic nature of the pixels.



AI is poised to radically transform software development | CIO


New tools and cutting-edge projects show how machine learning and advanced analytics may soon revolutionize how software is designed, tested, and deployed.


We are entering the age of what Tesla AI director Andrej Karpathy calls “Software 2.0,” where neural networks write the code and people’s main jobs are defining the tasks, collecting the data, and building the user interfaces.

But not all tasks can be tackled by neural networks — at least, not yet — and traditional software development still has a role to play. Even there, however, artificial intelligence, machine learning, and advanced analytics are changing the way that software is designed, written, tested, and deployed.

Brazil-based TOTVS provides mission-critical industry software for about 100,000 enterprise customers. For example, trillions of dollars are transacted each day in its financial services solutions.



Big Data Analytics Paving The Path For Businesses With More Informed Decisions



AIL bags position amongst the top 4 teams at IIT Kharagpur – The Blue Pencil

A team comprising Navneet Kaur Dhanjal and Vaibhav Latiyan from the fifth year, along with Kainat Singh from the third year, represented the Army Institute of Law in the National Moot Court Competition organized by the Rajiv Gandhi School of Intellectual Property Law, IIT Kharagpur. It was a techno-legal moot based on artificial intelligence…


Livongo adds compatibility with Apple Watch, Fitbit, other smartwatches | MedTech Dive


Dive Insight:

The smartwatch integration is the latest effort by Livongo to offer personalized recommendations to its users addressing nutrition, exercise and sleep in an attempt to improve health outcomes.

The company says individuals will be able to share step data with Livongo through the smartwatch integration, adding another data point to its artificial intelligence system used to provide health insights.

“As we continue to expand our applied health signals platform, we can use the integration to aggregate more important health data that we can then interpret to better understand the unique needs of our members,” Livongo Chief Product Officer Amar Kendale said in a statement. 

Karl Poterack, Mayo Clinic’s medical director of applied clinical informatics, has cautioned that step data has limited ability to offer predictive clinical applications. At HIMSS 2019, he warned there has been inadequate high-quality research, and cautioned that doctors are concerned that access to increasing amounts of data may expose them to new legal liability.

“If you present this data and bring in your device and say ‘here I have this heart rate data from the last month’ we’re going to say, ‘that’s great, but we don’t know what it really means,’” Poterack said to a HIMSS panel earlier this year.

Livongo said interactive five-day challenges delivered through the new notification features, such as walking daily or drinking water instead of soda, are designed to help individuals form better health habits.


ThisPersonDoesNotExist.com Uses Artificial Intelligence to Generate Fake Human Portraits – TechEBlog

ThisPersonDoesNotExist Fake Faces
Photo credit: The Verge
We have seen the future of artificial intelligence, and it can already generate fake human portraits indistinguishable from real ones. ThisPersonDoesNotExist.com was created by Uber software engineer Philip Wang and uses research by NVIDIA to generate fake portraits. The algorithm is trained on a dataset of real images and then uses a generative adversarial network (GAN), a type of neural network, to fabricate new examples. Read more for a slideshow of fake faces generated by the website.



“The underlying AI framework powering the site was originally invented by a researcher named Ian Goodfellow. Nvidia’s take on the algorithm, named StyleGAN, was made open source recently and has proven to be incredibly flexible. Although this version of the model is trained to generate human faces, it can, in theory, mimic any source. Researchers are already experimenting with other targets, including anime characters, fonts, and graffiti,” reports The Verge. Check it out here.



Computerworld HK bids adieu after 35 years | ComputerWorld Hong Kong

For three and a half decades, Computerworld Hong Kong has chronicled the ups and downs of Hong Kong’s IT market as a trusted source of local news and insights into enterprise technology.

We started the journey when mainframes ruled the data centers, personal computers were just entering mainstream adoption and the internet was still a dozen years into the future.

Hong Kong’s IT industry has grown in leaps and bounds through major business and technology shifts, persevering through periods of missteps to become one of Asia’s digital hubs.

Computerworld Hong Kong steps away from the journey tomorrow as the industry stands on the verge of a massive transformation, with elements such as cloud, 5G, artificial intelligence and IoT coming together to propel Hong Kong’s smart city development to the next level.

We have been privileged to accompany the industry through its evolution across 35 years. And we at the editorial team, comprising Nancy Ho, Dylan Bushell-Embling and myself, thank you for your steadfast support over the years.

We wish the industry well and we look forward to its continued growth.

Until we meet again.

Gigi Onag

Deputy Editor, Computerworld Hong Kong


The Guardian publishes first-ever op-ed written entirely by artificial intelligence

The Guardian on Tuesday published its first-ever op-ed written entirely by artificial intelligence.

Why it matters: It’s the latest in a series of developments over the past few years showcasing how artificial intelligence is being tested as a replacement for certain functions of journalism, but not for the industry itself.

Details: The outlet fed a prompt to GPT-3, OpenAI’s powerful new language generator, and asked the machine to write an essay from scratch. The prompt asked the machine to write an op-ed convincing readers that robots come in peace. Here’s some of what it came up with:

The big picture: There have been many conversations over the past few years about whether journalists and editors could one day be replaced by machines.

Our thought bubble: AI isn’t replacing journalism, but like every other industry, it’s upending it and shaping it in new ways.
