A Guy Invented a Bot that Writes Songs Like Luis Alberto Spinetta

You don’t have to be a Black Mirror fan to acknowledge that technology is rapidly blurring the boundaries of what we thought to be impossible just a few years back. And the arts seem to be in the middle of all the changes taking place throughout society. Filmmakers are casting robots, dead musicians are touring as holograms, and now it seems that Artificial Intelligence may be tackling one of the most sacred pillars of Argentine rock royalty.

An informatics engineer from the University of Buenos Aires (UBA) has developed a bot that generates song lyrics imitating the patterns used by Luis Alberto Spinetta. Freaked out yet? Well, let me explain a little more, to the best of my abilities. But first, let’s listen to El Flaco while we read, shall we?

So what exactly is this all about? Well, this bot is the brainchild of Alex Ingberg, an Argentine who currently resides in Tel Aviv and who published his findings in a complicated, dummy-unfriendly post on Medium, which I will now do my absolute best to translate for you (wish me luck). It begins, simply enough, with the bot being fed all of Spinetta’s lyrics, from his first album Almendra (1969) to his posthumous work Los Amigo, released in 2015, taking into account every song in which he had at least a partial songwriting credit. So far so good, all is understood. But this is when it all gets tricky.

Ingberg created two different algorithms. The first was built from a process called a Markov chain, which didn’t yield very original results, since the mechanism works purely on probabilities. In Ingberg’s own words, the result was “copy-paste of [Spinetta’s] work, a collage among different lyrics to create a new song,” and it even dished out several phrases that were identical to those written by El Flaco himself. But the second algorithm is where the magic really happened. In it, Ingberg worked with something called a Recurrent Neural Network (RNN), which recognizes patterns in data streams and also adds a temporal dimension. Which means it kinda, sorta remembers… I know what you’re thinking.
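For the curious, the Markov-chain half of that looks roughly like this. This is a minimal sketch in Python, not Ingberg’s actual code, and the lyrics.txt file name is just a placeholder for wherever the collected lyrics live:

```python
import random
from collections import defaultdict

def build_markov_chain(corpus: str, order: int = 2):
    """Map each sequence of `order` words to the words that were seen to follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length: int = 30):
    """Start from a random key and repeatedly sample the next word by observed frequency."""
    key = random.choice(list(chain.keys()))
    output = list(key)
    for _ in range(length):
        candidates = chain.get(key)
        if not candidates:          # dead end: this sequence never continued in the corpus
            break
        output.append(random.choice(candidates))
        key = tuple(output[-len(key):])
    return " ".join(output)

# Hypothetical usage with a corpus file of Spinetta lyrics:
# with open("lyrics.txt", encoding="utf-8") as f:
#     print(generate(build_markov_chain(f.read())))
```

Because every next word is chosen only from words that actually followed that phrase in the original lyrics, the output tends to echo the source almost verbatim, which is exactly the “collage” effect Ingberg describes.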

After this simple introduction comes a very complicated explanation that uses terms like LSTM, Python 3, and something called Greg Surma’s text predictor (again, if you want to go deep into this, go to Mr. Ingberg’s original post). Suffice it to say that after 50,000 iterations he was able to obtain some real, palpable results. In his words again: “The interesting thing about this is not just seeing the final result, but the learning process that the algorithm was having in every iteration. From a concoction of characters, it goes on to create formed verses in only a few cycles.”
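Without reproducing Greg Surma’s text predictor or Ingberg’s code, the general shape of a character-level LSTM generator looks something like the sketch below. It’s a hedged illustration in Keras; the file name, layer sizes, and epoch count are guesses for demonstration, not his settings:

```python
import numpy as np
from tensorflow import keras

text = open("lyrics.txt", encoding="utf-8").read()   # hypothetical corpus of Spinetta lyrics
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len = 40
# Cut the corpus into overlapping 40-character windows; the target is the next character.
X = np.array([[char_to_idx[c] for c in text[i:i + seq_len]]
              for i in range(0, len(text) - seq_len, 3)])
y = np.array([char_to_idx[text[i + seq_len]]
              for i in range(0, len(text) - seq_len, 3)])

model = keras.Sequential([
    keras.layers.Embedding(len(chars), 64),
    keras.layers.LSTM(128),   # the recurrent "memory" that adds the temporal dimension
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=20)   # Ingberg reports running ~50,000 iterations
```

The network never sees a dictionary or a grammar book, only sequences of characters, which is why the early iterations produce gibberish before verse-like structure emerges.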

The results are, nevertheless, very difficult to evaluate for an untrained eye. They’re full of grammatical and even spelling errors, and in some instances they just don’t make any sense at all. But for Ingberg, the findings couldn’t be more of a success. “Although the lyrics generated by artificial intelligence have these little flaws, we can definitely see that the model correctly learned to copy the style of the provided dataset,” he says. In other words, the results are quite extraordinary, considering that the network never received any data from a dictionary or a grammar book. Just El Flaco’s words. No news yet about any Spinetta hologram being developed for next year’s Lollapalooza, though.

Pedro is the Lifestyle editor at The Bubble. Over the years he’s written about porn directors, viral social media darlings and once about a guy that built sex machines in Ramos Mejía. He enjoys writing chronicles, interviews and those top 5 lists that seem to be so popular with the cool kids nowadays.


Anyone Can Become a Professional Dancer, Thanks to New Artificial Intelligence Technology



“Survival of the Richest” Depends on Them Saving Us All

Have you heard about “the event”?
Turns out “the event” is a codename for the systemic crisis approaching humanity in giant strides, a euphemism used by the world’s ultra-rich. Whether it will be triggered by an economic meltdown and massive social unrest, climate change, nuclear explosion, an epidemic, or cyber warfare, it will result in global catastrophe, and the upper echelon wants to be ready for “Judgment Day.”
To prepare for the inevitable, five tycoons invited media theorist and technology author Prof. Douglas Rushkoff to give them some advice, for the symbolic fee of half his annual income.
During the meeting, it suddenly dawned on Rushkoff that the super-rich had a completely different reason for investing in technology: all the newly developing cutting edge technologies, such as artificial intelligence, blockchain, 3D printing, CRISPR and even the colonization of Mars, are seen and supported by the world’s elites as means to save themselves from “the event.”
While we see supermarkets without cashiers, autonomous vehicles and robots doing the work for us — the super-wealthy see means of protection against angry mobs and systemic breakdowns in the not-too-distant future.
Indeed, the wealthy and powerful few have a much broader view of global risk than most of human society. And yet, what they can see and predict is only a fragment of the overall picture. The multi-faceted human crisis originates from a completely different cause and happens for an entirely different purpose to what they understand, which is why they think they can evade it.
The actual “event” is a natural turning point in human evolution, and its origins go back roughly 14 billion years, to the Big Bang.
Through billions of years, the development of matter created gases, dust, stars and planets, and then the biological life of flora and fauna on Earth. But along with the expansion of the universe, nature also works to bring all levels of life into balance: from the inanimate, to plant life, to animal life, to human life. And our turn has just arrived.
Whether we realize it or not, nature is pushing us to come to balance with it. And that means becoming an integral, harmonious part of the natural system, which necessitates the evolution of human society as a collective species around the planet. Gradually, nature is amplifying our sensitivity to our global interdependence, forcing us to recognize the human network that we are all part of, and transform our societies accordingly.
No technology can stop the laws of nature and no bunker can keep anyone unaffected by them. But, we can learn how to go along with the evolutionary pressure, rather than against it.
To avoid becoming victims of an aggressive breakdown of our current culture, we have to acknowledge and prepare ourselves for our inevitably connected future. People have to learn about the laws of nature and how they form an integral system, where every element depends on its balanced connection with the others and how it complements them.
But that’s just the beginning of humanity’s transformation. It’s no coincidence that in the last few decades a growing body of research from multiple fields has confirmed that positive human connections make us smarter and better in every sense, as well as happier and healthier. Human beings will have to discover and activate their inherent wiring for connection by consciously practicing it.
The more we practice our positive connections — personally, socially and globally — the more we see that we are coming to balance with the laws of nature, and that will become our new source of fulfillment.
Surely, the transformation of human society will require a massive socio-educational endeavor around the planet, using our media and our technologies in a new way and for a new purpose. The irony is that precisely the people like those who met Rushkoff have all the means necessary to make this happen. What they lack is only the understanding that the only way to save themselves from “the event” is to save human society as well.


How advanced industrial companies should approach artificial-intelligence strategy

Leaders need to determine what AI can do for their company by looking at potential applications and scenarios and then building their approach around those findings.


IBM Supercomputer Helps Identify 77 Drugs That Could Treat The Coronavirus | Ubergizmo

However, thanks to the work of researchers, more potential treatment options have been discovered. The researchers harnessed the power of IBM’s Summit supercomputer, which identified 77 drugs that show promise in treating and fighting the coronavirus.

Starting with more than 8,000 compounds, Summit allowed the researchers to quickly narrow down which drugs were most promising. They identified the potential drugs by looking for compounds that could stop the coronavirus from binding to cells via its spike protein.

The identified drugs, at least in theory, would instead bind to the protein and prevent the virus from wreaking havoc on the human body. Jeremy Smith, co-author of the research, warns that this does not mean that they have found a cure yet.

According to Smith, “Our results don’t mean that we have found a cure or treatment for the coronavirus. We are very hopeful, though, that our computational findings will both inform future studies and provide a framework that experimentalists will use to further investigate these compounds. Only then will we know whether any of them exhibit the characteristics needed to mitigate this virus.”

Source: BGR.


Robotics Business Review Names the RBR50 for 2019 – Robotics Business Review


May 14, 2019      

FRAMINGHAM, Mass. — Robotics Business Review is proud to announce the 2019 RBR50 list of leading automation companies. The RBR50 is the annual list of the 50 most innovative and transformative robotics companies that have achieved commercial success in the past year. Robotics Business Review teamed up with IDC to choose this year’s winners.

“For the past eight years, the RBR50 has been the premier listing of the most innovative and commercially successful robotics companies around the world,” said Editor-in-Chief Keith Shaw. “The list continually evolves to show how the commercial and industrial robotics landscape is changing, giving readers a sense of who is leading the charge in robotics, and where the industry is heading.”

For 2019, companies were selected from commercial and industrial robotics categories, including component makers, collaborative robotics, autonomous mobile robots, artificial intelligence, healthcare and service robotics, and more.

All of the 2019 RBR50 companies will be listed in an exclusive database available to RBR Insiders. A dedicated webcast later this year will also discuss this year’s RBR50 selections.

To see who was listed in the 2019 RBR50 report, click here to download the free report.

ABOUT ROBOTICS BUSINESS REVIEW: Robotics Business Review provides actionable business intelligence for the global robotics industry. Members enjoy exclusive insights into global news, tracking of financial transactions, analysis of new technologies and companies, annual and quarterly research reports, access to the RBR50 list of leading robotics companies, and much more. Visit RoboticsBusinessReview.com.

ABOUT EH MEDIA: EH Media is an integrated media company and the leading provider of independent business and consumer content and information serving the consumer, commercial & custom electronics, security, information technology, house of worship, pro audio, robotics, and supply chain markets through multimedia publications, websites, newsletters, and expos. EH Media provides resources to millions of professionals and consumers worldwide. Visit www.ehmedia.com.



Cousins of Artificial Intelligence

Artificial Intelligence is the broader umbrella under which Machine Learning (ML) and Deep Learning (DL) fall: ML is a subset of AI, and DL is a subset of ML.
Artificial Intelligence
“The study of the modelling of human mental functions by computer programs.” — Collins Dictionary
AI is composed of two words: artificial and intelligence. Anything that is not natural and is created by humans is artificial. Intelligence means the ability to understand, reason, plan, and so on. So we can say that any code, technology, or algorithm that enables a machine to mimic, develop, or demonstrate human cognition or behavior is AI.
The concept of AI is very old, but it has only recently gained popularity. Why?
The reason is that earlier we had very little data with which to make accurate predictions. Today, a tremendous amount of data is generated every minute, which helps us make more accurate predictions. Along with the enormous amount of data, we also have more advanced algorithms and the high-end computing power and storage needed to deal with data at that scale. Examples include Tesla’s self-driving cars, Apple’s Siri, and many more.
Machine Learning
We have seen what AI is, but what issues led to the introduction of machine learning?
A few reasons were:
In the field of statistics, the problem was “How do we efficiently train large, complex models?” In computer science and AI, the problem was “How do we train more robust versions of AI systems?”
Because of these issues, machine learning was introduced.
What is machine learning?
“Machine learning is the science of getting computers to act without being explicitly programmed.” — Stanford University
It’s a subset of AI that uses statistical methods to enable machines to improve with experience. It enables a computer to act and make data-driven decisions to carry out a certain task. These programs or algorithms are designed in such a way that they can learn and improve over time when exposed to new data.
Example:
Suppose we want to create a system that tells us the expected weight of a person based on their height. First, we collect the data, where each point on the graph represents one person’s height and weight.
To start with, we will draw a simple line to predict weight based on height.
A simple line could be W = H - 100, where:
W = weight in kg
H = height in cm
This line can help us make predictions. Our main goal is to reduce the distance between the estimated value and the actual value, i.e., the error. To achieve this, we draw a straight line that fits through all the points as closely as possible.
Minimizing that error improves the performance of the model, and the more data points we collect, the better the model becomes.
So when we feed in new data, that is, a person’s height, the model can easily tell us that person’s expected weight.
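Here is a minimal sketch of that idea in Python; the height and weight numbers below are made up purely for illustration:

```python
import numpy as np

heights = np.array([150, 160, 165, 172, 180, 188])   # cm
weights = np.array([52, 58, 63, 70, 79, 85])          # kg

# np.polyfit finds the slope and intercept that minimize the sum of squared errors
slope, intercept = np.polyfit(heights, weights, deg=1)

def predict_weight(height_cm: float) -> float:
    return slope * height_cm + intercept

print(predict_weight(175))   # estimated weight for a new, unseen height
```

The fitted line plays the same role as the rough W = H - 100 guess above, except its slope and intercept are chosen to make the overall error as small as possible.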
Deep Learning
“Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks”. — Machine Learning Mastery
It’s a particular kind of machine learning inspired by the functionality of our brain cells, called neurons, which led to the concept of the artificial neural network (ANN). An ANN is modeled using layers of artificial neurons, or computational units, that receive input and apply an activation function along with a threshold.
In a simple model, the first layer is the input layer, followed by a hidden layer, and finally an output layer. Each layer contains one or more neurons.
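A toy forward pass through such a network might look like this. The weights here are random, purely to show the layer structure; a real network learns its weights from data:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x = rng.random(3)                              # 3 input features (input layer)
W1, b1 = rng.random((4, 3)), rng.random(4)     # hidden layer: 4 neurons
W2, b2 = rng.random((1, 4)), rng.random(1)     # output layer: 1 neuron

hidden = sigmoid(W1 @ x + b1)   # each hidden neuron: weighted sum of inputs + activation
output = sigmoid(W2 @ hidden + b2)
print(output)
```

Training consists of adjusting W1, b1, W2, and b2 so that the output matches the desired answers, exactly the error-minimization idea from the height/weight example, just with many more parameters.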
A simple example to understand how things happen at a conceptual level:
How do you recognize a square among other shapes?
The first thing we do is check whether the figure has four lines. If yes, we check whether the lines are connected and form a closed shape. If yes, we finally check whether all the corners are right angles and all the sides are equal.
We consider the figure a square only if it satisfies all of these conditions.
As the example shows, this is nothing but a nested hierarchy of concepts: we took the complex task of identifying a square and broke it down into simpler checks (see the sketch below). Deep learning does the same thing, but at a much larger scale.
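As a toy illustration, here is that nested check written out in Python, assuming we are handed the figure as four corner points in order (a big simplification of real shape recognition):

```python
import math

def is_square(points):
    if len(points) != 4:                       # step 1: four corners / four lines
        return False
    # step 2: the figure is closed, so side i joins point i to point (i + 1) % 4
    sides = [(points[(i + 1) % 4][0] - points[i][0],
              points[(i + 1) % 4][1] - points[i][1]) for i in range(4)]
    lengths = [math.hypot(dx, dy) for dx, dy in sides]
    if not all(math.isclose(l, lengths[0]) for l in lengths):   # step 3: equal sides
        return False
    # step 4: consecutive sides are perpendicular (their dot product is zero)
    return all(math.isclose(sides[i][0] * sides[(i + 1) % 4][0] +
                            sides[i][1] * sides[(i + 1) % 4][1], 0.0, abs_tol=1e-9)
               for i in range(4))

print(is_square([(0, 0), (0, 2), (2, 2), (2, 0)]))   # True
```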
For instance, suppose a machine has the task of identifying an animal: deciding whether a given image shows a cat or a dog.
If we were asked to solve this using machine learning, we would define features such as whether the animal has whiskers, whether its tail is straight or curved, and many others. We would define all the features and let the system learn which of them matter most for classifying a particular animal. Deep learning takes this one step further: it automatically finds which features are most important for classification, whereas in machine learning we had to hand the features over manually, as in the sketch below.
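To make the contrast concrete, the classical machine-learning recipe above might look something like this sketch; the feature values and labels are invented for illustration, and a deep-learning model would instead learn its own features from raw pixels:

```python
from sklearn.linear_model import LogisticRegression

# Each row is one animal, described by features WE chose by hand:
# [has_whiskers, tail_is_curved, ear_pointiness]
X = [[1, 0, 0.9], [1, 1, 0.8], [0, 1, 0.2], [0, 0, 0.3]]
y = ["cat", "cat", "dog", "dog"]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1, 0, 0.7]]))   # the model only weighs the features we handed it
```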
So far we have seen that AI is the bigger picture, and that machine learning and deep learning are subparts of it.
Machine learning (ML) vs Deep learning (DL)
The easiest way to understand the difference between machine learning and deep learning is that “DL is ML”; more specifically, it is the next evolution of machine learning.
Let’s take a few important parameters and compare machine learning with deep learning.

1. Data Dependency
The most important difference between the two is how performance changes as the data size grows. With a small dataset, deep learning doesn’t perform that well. Why?
Because a deep learning algorithm needs a large amount of data to learn the patterns properly. Machine learning, on the other hand, works well on smaller datasets.
2. Hardware Dependency
Deep learning algorithms are heavily dependent on high-end machines, while machine learning algorithms can also run on low-end machines. This is because deep learning relies on GPUs, which are an integral part of how it works: training involves a huge number of matrix multiplication operations, and those operations are only executed efficiently on a GPU.
3. Feature engineering
Feature engineering is the process of applying domain knowledge to reduce the complexity of data and make patterns more visible to learning algorithms. This process is difficult and expensive in terms of time and expertise. In machine learning, most of the features need to be identified by an expert and then hand-coded according to the domain and data type, and the performance of the model depends on how accurately those features are identified and extracted. Deep learning, by contrast, tries to learn high-level features directly from the data, which is what puts it ahead of machine learning.
4. Problem Solving Approach
When we solve a problem using machine learning, it is recommended to break the problem into sub-parts first, solve them individually, and then combine the results. Deep learning, on the other hand, solves the problem end to end.
For instance,
The task is multiple object detection, i.e., determining what the objects are and where they are located in the image.
So let’s see how this problem is tackled using machine learning and deep learning.
In a machine learning approach, we divide the problem into two parts: object detection and object recognition.
We would use an algorithm such as bounding-box detection to scan the image and find all the objects, then use an object recognition algorithm to identify the relevant ones. Combining the results of both algorithms gives the final answer: what each object is and where it sits in the image.
Deep learning performs the process end to end: we pass an image to the algorithm, and it outputs the locations along with the names of the objects.
5. Execution Time
Deep learning algorithms take a long time to train. This is because a deep learning model has so many parameters that training takes much longer than usual, whereas in machine learning the training time is relatively short.
At test time the situation reverses: deep learning algorithms run very quickly, whereas for machine learning algorithms like k-NN, test time grows as the size of the data increases.
6. Interpretability
This is the main reason people think twice before using deep learning in industry. Suppose we use deep learning for automated essay scoring. The performance it delivers is excellent and on par with human graders, but it doesn’t tell us why it gave a particular score. Mathematically, it is possible to find out which nodes of the deep neural network were activated, but we don’t know what those neurons were supposed to model or what the layers were doing collectively, so we fail to interpret the result. A machine learning algorithm like a decision tree, in contrast, gives us crisp rules explaining why it chose what it chose, so the reasoning behind it is easy to interpret.
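As a small illustration of that contrast, scikit-learn can print a trained decision tree’s learned rules as plain if/else statements; the iris dataset here is just a stand-in for whatever real scoring task you care about:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Every prediction can be traced to a human-readable chain of threshold checks,
# something a deep neural network's weight matrices do not give us directly.
print(export_text(tree, feature_names=list(iris.feature_names)))
```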
I hope you now have a clear idea of all three, what relationship they share, and how they differ from one another.
Thanks for reading!


“Artificial Intelligence Plan” That Became “Coronavirus Plan” & “China Did It” Narrative Is WMDs 2.0

Welcome to The Daily Wrap Up, a concise show dedicated to bringing you the most relevant independent news, as we see it, from the last 24 hours (4/21/19).

As always, take the information discussed in the video below and research it for yourself, and come to your own conclusions. Anyone telling you what the truth is, or claiming they have the answer, is likely leading you astray, for one reason or another. Stay Vigilant.

Bitcoin Donations Are Appreciated:
www.thelastamericanvagabond.com/bitcoin-donation
(3FSozj9gQ1UniHvEiRmkPnXzHSVMc68U9f)

Question Everything, Come To Your Own Conclusions.


Enlarge Your Small JPGs Without Losing Quality

When working online, we often stumble on pictures that are too small to use, and when we enlarge them, we lose too much quality, making them blurry or pixelated in the process.

Now, thanks to the power of artificial intelligence, we can enlarge those pics without losing as much quality compared to the enlarge feature of most traditional photo editors, including Photoshop, IrfanView, and many others.

Introducing ImgLarger, a service that will not enlarge your, *ahem*, attributes in real life, but will do wonders with your small pictures! Here’s an old photo I took at Montreal Comiccon that I deliberately scaled down.

Now, here’s the photo enlarged via Photoshop:

The photo enlarged via ImgLarger:

And here it is, side by side with the original:

It’s not perfect, but it’s much better! All you have to do to use the service is to create a free account, which will give you a limited number of uses, but if you only use it occasionally, it should be enough for most people!


Agencies Asked for Examples of Reskilling Employees

OPM has asked agencies to provide examples of how they are working to reskill and upskill employees, reminding them that it is a priority in the President’s Management Agenda.

A memo said that OPM is “looking for your examples of best of breed; promising practices; and innovative approaches to concentrate on the reskilling and upskilling of our workforces.” The President’s Management Agenda (PMA) noted that “the workforce for the 21st Century must enable senior leaders and front-line managers to align staff skills with evolving mission needs. This will require more nimble and agile management of the workforce, including reskilling and redeploying existing workers to keep pace with the current pace of change.”

It suggested that agencies “focus on the many talent development programs across federal government. If appropriate, we are looking at possible wholesale changes and a technological ecosystem approach to mitigate any challenges you have identified.”

Reskilling employees in the wake of changes including expanded use of artificial intelligence and other innovations has been a running theme for the administration not only through the PMA but through annual budget proposals and other workforce initiatives.
