Featured – PAF Center of Artificial Intelligence and Computing | Pakistan Defence

If it’s part of the AZM family, then I believe that we are very late here.

AI and machine learning have been around for more than a decade now, and in commercial use at that.

Still, it's good to see that the need for AI has been recognized and that a practical step has been taken.

This kind of work is often done jointly between militaries and other institutions. Take the USAF as an example: Air Force Materiel Command works jointly with various defence contractors, and you'd see civilians and military personnel flying F-16s, F-35s, F-15s and F-22s, along with numerous other projects. Still an excellent step for the PAF.


Top 11 Best Features Of The Samsung Galaxy S10, S10 Plus & S10E – Correct Blogger | Nothing Goes Wrong

What's up guys, so after months of leaks and rumors, the Galaxy S10 devices are finally here, and as you'd already expect, they come with some of the class-leading specs and features that you're not going to find on other similar, closely related devices, at least for now.
So don't y'all think it'd be pretty nice if we took a look at some of the best features that make the new Galaxy S10 smartphones what they are today? Sounds pretty nice, right? Alright, here we go..
Top 11 Features Of Samsung Galaxy S10 Devices
1. First Dynamic AMOLED Display on a Smartphone

Now it is true that for some time, Samsung displays have proven to be the best on the market, and with these new Galaxy S10 devices, they have taken that record up a notch once more.
Samsung has long been known for using "Super AMOLED" displays on its smartphones, but if you take a few minutes to go through the Galaxy S10 specs on GSMArena.com, you'll notice that the term has been replaced with "Dynamic AMOLED." That makes the Galaxy S10 phones the world's first with a Dynamic AMOLED display, and also the first HDR10+ certified smartphones, which simply means better colour accuracy, peak brightness and contrast ratio.
Not only that, but the display also reduces blue light by 42%, cutting down on the damage our smartphones do to our eyes. Honestly, the display on the S10 models looks distinctly futuristic and elegantly minimal at the same time.
2. AI App Predictions

Samsung has built artificial intelligence into the S10, and you're going to love this new feature, because it has a big impact on the phone's overall user experience. It learns the ways and patterns in which you use your phone over time, and it adapts and adjusts its behavior accordingly.
For example, it will learn which apps you use at certain times of the day and pre-load those apps in the background, so that they open faster when you need them. Pretty cool feature, right?
3. Fast Wireless Charging 2.0

The Galaxy S10 supports Fast Wireless Charging 2.0, which means you can charge your device on a wireless charger nearly as quickly as you would with a wired one. This might not seem like a big feature or a big upgrade to a lot of folks out there, because not that many people have come to rely on wireless charging as their main way of topping up their devices, but it's a good thing that it's there.. Right?
4. Wireless PowerShare (a.k.a. Reverse Wireless Charging)

Remember the reverse wireless charging we saw on Huawei's Mate 20 Pro last year? Well, it is making a comeback on this year's Galaxy S10 devices, and this time it is even better: instead of the meagre 2.5W charging speed the Mate 20 Pro offered, Samsung has pushed the figure up to 9W.
So this means you can wirelessly charge other wireless-charging-capable devices at a faster rate than you could with Huawei's Mate 20 Pro.
5. Ultrasonic In-Display Fingerprint Reader

The Samsung Galaxy S10 has a fingerprint scanner, but not the regular kind you're used to seeing on the back of every smartphone these days. Instead, Samsung opted for an in-display fingerprint scanner, and not even the optical in-display type that is slowly becoming common on new Android smartphones: they went for an ultrasonic in-display 3D fingerprint scanner.
What this means is that instead of simply taking a picture of your fingertip and comparing it to an image already stored in its database, as optical fingerprint scanners do, this one uses sound waves to map the ridges of your finger and unlock the phone, which makes it more secure and much harder to fool. Pretty cool feature, right?
UPDATE – A lot of fans were bashing Samsung and its new ultrasonic fingerprint scanner, complaining that it was way slower than other optical in-display fingerprint scanners, but those claims were put to rest a couple of days ago when a video posted by Ice Universe on Twitter showed the best way to use the Galaxy S10's fingerprint scanner: just place the tip of your finger on it and lift it off after a split second, instead of pressing and holding it down. You can watch the video below.
The correct way to unlock the Galaxy S10 ultrasound! Please don't press the screen, you just need to touch the screen gently, it will be unlocked soon, don't press the screen! This will be slower. By the way, you can quickly unlock it without lighting the screen. pic.twitter.com/3cfXLuOcNZ — Ice universe (@UniverseIce) February 26, 2019

Now, if you think all the features we've covered so far are pretty cool, check out the next one below.
6. The Headphone Jack… Yay.. 😂😂🎧
Yes, the Samsung Galaxy S10 phones finally arrived with a 3.5mm headphone jack, even after rumors went viral that Samsung was ditching it; those rumors turned out to be false. For now we can't tell whether the jack will survive on the next Galaxy Note or the next Galaxy S flagship, but until then, let's enjoy the moment.. Right?
7. Up To 12GB of RAM and 1TB of Storage

As you probably already know, the Galaxy S10 devices all come with a base storage of 128GB, with 8GB of RAM on the S10 and S10 Plus, while the lower-specced Galaxy S10E starts with 6GB of RAM. And even though that is already a lot of storage for a single phone, on the Galaxy S10 Plus you can take those numbers up a notch by getting the variant with 12GB of RAM and a whopping 1TB of storage. Not only is that more storage than most people will ever fill over the days, months and years they use the phone, Samsung still deemed it fit to include expandable storage on all three models, for up to an additional 512GB.. Now that's what I call gangster.. Deal with it, Apple.
8. Triple Camera Setup

The triple camera thing is not really new; plenty of other smartphones have been using it for a while, Huawei for example. Even Samsung has introduced it a couple of times on some of its Galaxy A smartphones, and at one point even put four cameras on one of them.
But this is the first time we're seeing a triple camera setup on a Galaxy S phone, and it has turned out to be the best so far. One is a normal wide-angle lens, the second is a telephoto, and the last one is an ultra-wide-angle lens that helps you fit more objects and people into a single frame. This particular setup isn't new either; Huawei and LG have been doing it for some time now with the LG V40 ThinQ and the Huawei Mate 20, but it just feels good that it finally made it to a Galaxy S.
Apart from the three lenses at the back, the camera app itself has quite a lot of features that help photos turn out the way you want them, such as the new Bright Night feature, which, as you'd already guessed, is similar to the Google Pixel's Night Sight feature, and there is an AI scene optimizer built in as well. We'll discuss more of that in our full review, so stay tuned and make sure you subscribe to our newsletter to get notified when it drops..#Cheers.
9. Corning Gorilla Glass 6 Front Protection

Oh yes.. the Galaxy S10 has the latest Corning Gorilla Glass 6 protection on its front, while the back stays at Gorilla Glass 5. Corning says it's almost twice as durable as Gorilla Glass 5, making it ideal for people like me who don't like putting screen protectors on their phones.
10. The Latest 7nm/8nm Processors

With the S10 you're getting the new 7nm Snapdragon 855 or Samsung's own 8nm Exynos 9820 chip, depending on your region. It features a completely different cluster architecture compared to its predecessor, which means performance is significantly improved. The new processor also consumes less power, so the overall user experience should be noticeably better than on last year's devices.
11. Large Batteries @ 4100mAh

On the Galaxy S10 Plus you're getting a pretty large 4100mAh battery. Not only is this battery large, it is also bigger than last year's Galaxy Note 9 battery by 100mAh. Another intriguing thing is that, even with the larger battery, the S10 Plus is still smaller and feels more compact than the Note 9, thanks to the significantly reduced top and bottom bezels. As for the weight, I don't know how they managed to pull that stunt.
So that is all for today, guys, and as you've read, you'll agree with me that the Samsung Galaxy S10 devices are pretty capable, solid phones, and they have certainly lived up to the hype. Honestly, I can't wait to get my hands on one of them, probably the S10 model with the single front-facing camera; I just love how small and compact the phone feels even while packing a 6.1-inch display. Let us know what you think about these phones in the comments section below, support us by subscribing to our newsletter for free, and as always, I'll see you in the next one. #Peace #Cheers… emmanuelGodwin


Samsung Science & Technology Foundation Announces Grants for Basic Science and Future Technologies – Samsung Global Newsroom

Funding to help advance basic science and research for new technologies
Selected 1H 2019 projects include AI, machine learning and environment research

Samsung Science & Technology Foundation today announced 44 research projects chosen for funding in the first half of 2019, which include pioneering research in physics and life sciences, new engineering solutions for the clean environment as well as breakthrough projects in artificial intelligence (AI).

The foundation was established in 2013 to help foster the development of basic science, materials engineering and information technology, with a total of KRW 1.5 trillion in grants to be awarded to research projects over 10 years. It has so far provided about KRW 667 billion in funding for 517 research projects at universities and public research institutes in Korea.

The foundation will look to expand its support in the areas of future technologies, such as AI, Internet of Things (IoT) and 5G, as well as projects that aim to improve local communities.

"Our goal is to support building the foundation for future science and technology," said Seong-Keun Kim, Chairman of Samsung Science & Technology Foundation. "We hope to make a lasting contribution to society by advancing basic science, bringing innovations and redefining the technology industry."

Mr. Kim, Professor of Chemistry at Seoul National University and Fellow of the Royal Society of Chemistry, was elected as the new chairman this week by the foundation’s Board of Directors.

For details of the 1H 2019 project selections, please refer to the original Korean announcement at https://news.samsung.com/kr/?p=391301.


This Smartphone App Uses Artificial Intelligence To Help Blind People ‘See’ Like Never Before

Seeing AI describes itself as a free app that narrates the world around you. Developed by Microsoft, this AI-driven research project is designed to help people who are blind or have low vision better understand the people, objects and text around them.


Assam: Scientists working to plug gaps in Covid-19 testing | Guwahati News – Times of India

GUWAHATI: In a major technological breakthrough, a research team comprising experts from North Eastern Hill University (NEHU), Shillong, and Adamas University, Kolkata, has expedited work to plug gaps in Covid-19 testing by developing a 'less harmful' and 'cost-effective' terahertz radiation (T-Ray) thermography device.
The device is intended as a potential alternative to infrared thermal scanners and CT imaging for the early detection and safe monitoring of Covid-19 patients. Moumita Mukherjee, Associate Dean of Adamas University (Kolkata), formerly associated with the Defence Research and Development Organisation (DRDO), and Dinesh Bhatia, Associate Professor in the Biomedical Engineering Department of North Eastern Hill University (NEHU), Shillong, together with their collaborative research group, are actively developing an Artificial Intelligence (AI) based T-Ray scanning unit to address the limitations of currently available infrared thermal scanners in the accurate and early detection of Covid-19 patients.
In a statement, Mukherjee and Bhatia said that the unique absorption fingerprint of T-Ray radiation in the lungs, and the contrast between thermal images of affected and healthy lungs, will help doctors and paramedical staff identify cases at an early stage, when the patient is still apparently asymptomatic. Bhatia is working on the analysis and extraction of the biomedical images using artificial intelligence, while Mukherjee is looking after the design and implementation of the device.
"The product will be cost-effective, allowing quick diagnosis and accurate screening and monitoring of large populations. Their extensive research is showing a ray of hope for easy identification followed by safe monitoring of Covid-19 patients worldwide. They acknowledge the support of their respective institutions in carrying out this research study," the statement read.
With the highly limited supply of Covid-19 test kits in India and the rest of the world, Bhatia said, people with mild symptoms are less likely to be tested. This, he said, leaves many people in the dark as to whether cold-like symptoms are just the sniffles or a mild case of the novel coronavirus, making them a potential source of spread in the community.

"Thermal screening or infrared-based devices, which are presently being used for temperature sensing at airports and railway stations and for surveillance in institutions, have major limitations in accurately identifying asymptomatic individuals carrying the virus, and such cases go undetected for days," added Bhatia. He said that non-ionizing terahertz radiation (T-Ray) imaging applied to the biomedical domain is a new field of research worldwide, and claimed that terahertz imaging has not yet been employed in such investigations by any other research group.
"Since terahertz radiation is non-ionizing in nature, its repetitive use in scanning and imaging for screening and monitoring will be harmless to the population and to users such as doctors, paramedical teams and other security staff in the vicinity, unlike X-ray or CT scan devices, as both are ionizing in nature and could cause cancer if used repeatedly on Covid-19 patients in the future," he said.
Detecting the virus at an early stage and isolating such individuals, through social distancing or self-quarantine at home for a period of fourteen to twenty days, may help prevent the spread of this severe communicable disease. Nevertheless, tests for SARS-CoV-2, the virus that causes Covid-19, are still in development and are awaiting approval by different regulatory agencies in the form of EUA or CE-IVD certification.
The researchers said that the T-Ray device will be more effective at obtaining reliable information than existing thermal scanners, and that the size, ease of use, cost-effectiveness and portability of a terahertz imaging unit make it one of the most exciting applications of terahertz technology for mass screening.



Artificial intelligence: Are we ready for artificial intelligence?

There is no doubt AI will transform society, and there is a big need to safeguard against improper use.


PVAMU Alumnus Donates to IBM Artificial Intelligence Academy – PVAMU Home

The IBM Skills Academy is designed for academia worldwide. The program helps Prairie View A&M University faculty to provide students with additional skills, giving them an advantage in the job market.


With MorphNet, Google Helps You Build Faster and Smaller Neural Networks

Designing deep neural networks these days is more art than science. In the deep learning space, any given problem can be addressed with a fairly large number of neural network architectures. In that sense, designing a deep neural network from the ground up for a given problem can be incredibly expensive in terms of time and computational resources. Additionally, given the lack of guidance in the space, we often end up producing neural network architectures that are suboptimal for the task at hand. Recently, artificial intelligence (AI) researchers from Google published a paper proposing a method called MorphNet to optimize the design of deep neural networks.
Automated neural network design is one of the most active areas of research in the deep learning space. The most traditional approach to neural network architecture design involves sparse regularizers such as L1. While this technique has proven effective at reducing the number of connections in a neural network, it quite often ends up producing suboptimal architectures. Another approach uses search techniques to find an optimal neural network architecture for a given problem. That method has been able to generate highly optimized architectures, but it requires an exorbitant number of trial-and-error attempts, which often makes it computationally prohibitive. As a result, neural network architecture search has only proven effective in very specialized scenarios. Factoring in the limitations of the previous methods, we can arrive at three key characteristics of effective automated neural network design techniques:
a) Scalability: The automated design approach should be scalable to large datasets and models.
b) Multi-Factor Optimization: An automated method should be able to optimize the structure of a deep neural network while targeting specific resources.
c) Optimal: An automated neural network design should produce an architecture that improves performance while reducing the usage of the target resource.

MorphNet

Google's MorphNet approaches the problem of automated neural network architecture design from a slightly different angle. Instead of trying numerous architectures across a large design space, MorphNet starts with an existing architecture for a similar problem and, in one shot, optimizes it for the task at hand.
MorphNet optimizes a deep neural network by iteratively shrinking and expanding its structure. In the shrinking phase, MorphNet identifies inefficient neurons and prunes them from the network by applying a sparsifying regularizer such that the total loss function of the network includes a cost for each neuron. Doing just this typically yields a network that consumes less of the targeted resource, but also achieves lower accuracy. However, MorphNet applies a specific shrinking model that not only highlights which layers of a neural network are over-parameterized, but also which layers are bottlenecked. Instead of applying a uniform cost per neuron, MorphNet calculates each neuron's cost with respect to the targeted resource. As training progresses, the optimizer is aware of the resource cost when calculating gradients, and thus learns which neurons are resource-efficient and which can be removed.
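To make the shrinking idea concrete, here is a minimal, hypothetical sketch (not MorphNet's actual implementation) of a resource-weighted sparsifying penalty applied to the batch-norm scale factors (gammas) that gate each neuron; the names `gammas`, `costs` and `strength` are illustrative:

```
# Hypothetical sketch of a resource-weighted sparsifying penalty.
# Each neuron i is gated by a batch-norm scale gamma_i; cost_i is the
# resource (e.g. FLOPs) that neuron contributes. Neurons whose gamma is
# driven toward zero by the penalty are treated as prunable.

def resource_weighted_penalty(gammas, costs, strength=1e-10):
    """L1-style penalty in which each neuron is weighted by its resource cost."""
    return strength * sum(c * abs(g) for g, c in zip(gammas, costs))

def total_loss(model_loss, gammas, costs, strength=1e-10):
    # The optimizer now "sees" the resource cost of every neuron, so
    # expensive but unhelpful neurons are pushed toward zero first.
    return model_loss + resource_weighted_penalty(gammas, costs, strength)
```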
The shrinking phase of MorphNet is useful for producing a neural network that optimizes the cost of a specific resource. However, that optimization can come at the cost of accuracy. That is precisely why MorphNet uses an expanding phase based on a uniform width multiplier to expand the sizes of all layers. For example, with a 50% expansion, an inefficient layer that started with 100 neurons and shrank to 10 would only expand back to 15, while an important layer that only shrank to 80 neurons might expand to 120 and have more resources with which to work. The net effect is a re-allocation of computational resources from less efficient parts of the network to parts of the network where they are more useful.
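As a rough illustration of that arithmetic, here is a tiny, hypothetical sketch (not MorphNet code) of applying a uniform width multiplier to the post-shrinking layer widths:

```
def expand_widths(shrunk_widths, multiplier=1.5):
    """Apply a uniform width multiplier to the layer sizes left after shrinking."""
    return [round(w * multiplier) for w in shrunk_widths]

# An inefficient layer that shrank from 100 to 10 neurons grows back to 15,
# while an important layer that only shrank to 80 grows to 120.
print(expand_widths([10, 80]))  # [15, 120]
```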
The combination of the shrinking and expanding phases produces a neural network that is more accurate than the original while still being optimized for a specific resource.
In this initial iteration, there are several areas in which MorphNet can deliver immediate value for neural network architectures.
· Targeted Regularization: MorphNet optimizes the structure of a deep neural network by focusing on the reduction of a specific resource. Conceptually, this provides a more targeted approach than traditional regularization techniques. The figure in the original post shows a ResNet-101 architecture optimized by MorphNet using two criteria: FLOPs and model size. The structures generated by MorphNet when targeting FLOPs (center, with 40% fewer FLOPs) or model size (right, with 43% fewer weights) are dramatically different. When optimizing for computation cost, higher-resolution neurons in the lower layers of the network tend to be pruned more than lower-resolution neurons in the upper layers. When targeting smaller model size, the pruning tradeoff is the opposite.
· Topology Morphing: Some optimizations created by MorphNet might produce completely new topologies. For instance, when a layer ends up with 0 neurons, MorphNet can effectively change the topology of the network by cutting the affected branch from the network. In the example from the original post, which again shows changes to a ResNet architecture, MorphNet might keep the skip-connection but remove the residual block (left). For Inception-style architectures, MorphNet might remove entire parallel towers (right).
· Scalability: One of the greatest advantages of MorphNet is that it can learn a new structure in a single training run, which minimizes the computational resources needed for training and allows it to scale to very complex architectures.
· Portability: The networks produced by MorphNet are technically portable and can be retrained from scratch as the weights are not tied to the learning procedure.
Google applied MorphNet to a variety of scenarios, including Inception V2 trained on ImageNet with FLOP optimization. In contrast with traditional regularization approaches that focus on scaling down the number of outputs, the MorphNet approach targets FLOPs directly and produces a better trade-off curve when shrinking the model. In this case, FLOP cost is reduced by 11% to 15% with the same accuracy as the baseline.

Using MorphNet

Google released an open-source version of MorphNet on GitHub. In a nutshell, using MorphNet consists of the following steps:
1) Choose a regularizer from morph_net.network_regularizers and initialize it with a specific optimization metric. The current implementation of MorphNet includes several regularization algorithms.
2) Train the target model.
3) Save the proposed model structure with the StructureExporter.
4) Retrain the model from scratch without the MorphNet regularizer.
The following code illustrates those steps:

“`
import tensorflow as tf  # TF 1.x-style API, as in the original example

from morph_net.network_regularizers import flop_regularizer
from morph_net.tools import structure_exporter

# Build the model to be optimized; build_model() and labels come from your own code.
logits = build_model()

# Step 1: a FLOP-targeting regularizer over the network that produces `logits`.
network_regularizer = flop_regularizer.GammaFlopsRegularizer(
    [logits.op], gamma_threshold=1e-3)
regularization_strength = 1e-10
regularizer_loss = (network_regularizer.get_regularization_term() *
                    regularization_strength)

# Step 2: train the model with the MorphNet penalty added to the task loss.
model_loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

optimizer = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)

train_op = optimizer.minimize(model_loss + regularizer_loss)
“`

Automated neural network architecture design is a key area for making deep learning more mainstream. The best neural network architectures are likely to be those produced by a combination of human engineers and machine learning algorithms. MorphNet brings a very innovative angle to this hot new corner of the deep learning ecosystem.


Clinical Decision Support in the Era of Artificial Intelligence | Clinical Decision Support | JAMA | JAMA Network

This Viewpoint discusses the potential capabilities and challenges of decision support systems that are designed to be used interactively by clinicians.


Amazon: New drone will start deliveries ‘within months’ | Retail Dive

Dive Insight:

Amazon's gung-ho pursuit of speed takes another step with the introduction of its latest drone. "Can we deliver packages to customers even faster? We think the answer is yes," Amazon Worldwide Consumer CEO Jeff Wilke wrote in a blog post about the new drone, adding that one of the pathways to "faster" is drone technology.

The e-tailer recently raised the bar for speedy logistics with the introduction of one-day delivery. “We’re able to do this because we spent 20 plus years expanding our fulfillment and logistics network,” Amazon CFO Brian Olsavsky said at the time of the announcement. Likewise, Amazon said it plans to use its “world-class fulfillment and delivery network” to scale up Prime Air and make deliveries using the drone within months.

Amazon Worldwide Consumer​ CEO Jeff Wilke stands beside the new drone at re:MARS this week in Las Vegas.

Amazon touted the high-tech and safety features of its newest (and so far nameless) drone. The device has a hybrid design, meaning it takes off and lands vertically like a helicopter and flies horizontally like an airplane. It uses artificial intelligence, machine learning and computer vision to detect and adjust for moving objects.

Technology, however, has rarely been the barrier to widespread drone deployment. Rather, strict regulations around U.S. air space have challenged companies to develop drones that are innovative but also in compliance.

The Federal Aviation Administration (FAA) sees a future with robust commercial drone usage, projecting that the number of these devices could triple by 2023. The agency has launched initiatives to help facilitate the growth of the drone market, and last month it granted permission for Alphabet's Wing Aviation to begin drone delivery, the first time a U.S. company has been cleared to deliver goods by drone.
