Join JAAGNet and Group

Sign up for JAAGNet & the Artificial Intelligence Group. It's FREE!!

 

Member Benefits:
_____________________

 

Again, signing up for JAAGNet & Group membership is FREE and will only take a few moments!

Here are some of the benefits of Signing Up:

  • Ability to join and comment on all the JAAGNet Domain communities.
  • Ability to blog on all the Domain communities.
  • Visibility to more pages and content at a group community level, such as Community, Internet, Social and Team Domain Community Feeds.
  • Make this your only content hub and distribute your blogs to LinkedIn, Reddit, Facebook, Twitter, WhatsApp, Messenger, Xing, Skype, WordPress blogs, Pinterest, email apps and many, many more (100+) social network and feed sites.
  • Opportunity to collaborate (soon to be released) with various JAAGNet Business communities and other JAAGNet Network members.
  • Connect (become friends), follow (and be followed) and network with JAAGNet members with similar interests.
  • Your content will automatically be distributed on Domain and JAAGNet Community Feeds, which are widely distributed by the JAAGNet team.

Join Us!


Silver Level Contributor

The Las Vegas service consists of Motional robotaxis, operating on the Lyft network

Service of the publicly available autonomous fleet run by Motional and Lyft was paused earlier this year due to Covid-19 but has resumed with enhanced protective measures.

Motional has resumed its self-driving mobility service with Lyft in Las Vegas after pausing operations earlier this year due to the Covid-19 pandemic.

With the addition of enhanced protective measures, passengers can now ride in new Motional-branded robotaxis, operating on Lyft’s network.

Frequently sanitised

“We’ve put extensive measures in place to keep our fleet thoroughly and frequently sanitised, and our passengers safe and healthy,” said Karl Iagnemma, president and CEO, Motional.

“We’re thrilled to bring the fleet back, and very proud of its place in history. It’s the longest-standing service of its kind, responsible for introducing self-driving cars to hundreds of thousands of people.”

The service resumes as consumers demand new transportation options, according to Motional. A recent report finds 70 per cent of Americans say coronavirus infection is a real concern impacting their transportation decisions, and one-in-five are more interested in driverless vehicles than they were before the pandemic.

The new protective measures follow CDC, World Health Organisation, and government guidelines, and include a partition between the front and rear seats, vehicle operator PPE, and vehicle sanitisation at the start of each shift, the end of each day, and between rides. 

“In these turbulent times, it’s energising to see the continued innovation made by both Motional and Lyft as we push towards delivering self-driving cars at scale,” said Nadeem Sheikh, VP autonomous vehicles programmes at Lyft. “Getting this fleet back up and running is a significant jumping-off point as we prepare to launch a robust set of new features for the self-driving fleet.” 

Motional claims its fleet, which launched in 2018, has provided over 100,000 rides to paying members of the public, with 98 per cent of passengers awarding their ride a five-star rating.

Motional is a joint venture between Hyundai Motor Group and Aptiv, a provider of advanced safety, electrification, and vehicle connectivity solutions.

Headquartered in Boston, Motional has operations in the US and Asia.

Originally published by
SmartCitiesWorld News Team | October 23, 2020
SmartCitiesWorld

Read more…
Silver Level Contributor

AI projects seem to be everywhere in industry, but a new study of 3,000 companies finds there's little financial ROI. The authors call for expanding organizational learning to improve returns. (Pixabay)

Despite widespread use of artificial intelligence, only 11% of companies say they see a significant financial return on their investment, according to a new study by Boston Consulting Group in partnership with MIT Sloan Management Review.

The low yield was revealed in a global survey of more than 3,000 managers with 57% saying they have piloted or deployed AI, a significant increase over three years ago.

The authors of the study reported that companies can get the basics of AI right with the right data, technology, talent and strategy, but still see low ROI.  “Only when organizations add the ability to learn with AI do significant benefits become likely,” the authors said.  “With organizational learning, the odds of an organization reporting significant financial benefits increases to 73%.”

The elements of learning with AI require companies to have a combination of machines learning autonomously, humans teaching machines and machines teaching humans.  Deploying the appropriate interaction modes for human-machine interactions is “critical,” according to the report.

Organizations need to make extensive changes to many processes with AI to receive the best ROI. Sometimes the changes will even be "uncomfortable," as the authors put it. "Organizational learning with AI demands, builds on and leads to significant organizational change."

The report describes the experience of several organizations that have deployed AI. They include Repsol, a global energy and utility company in Spain with more than 100 digital transformation projects that use AI in some manner, from drilling operations to retail service stations. AI is used to optimize the drilling of productive wells, reducing nonproductive time by up to 50% across 30 sites. AI is also used to prepare personalized offers for 8 million customers at 5,000 service stations, generating 400,000 offers each day and lifting sales. The improvement in sales is equivalent to the value of having up to 4.5% more service stations.

“Repsol does more than teach machines to drill, blend and serve,” the report says. “In effect, Repsol changed its processes to continuously learn with AI. Process improvements beget new behaviors and new human knowledge, which is then fed back to machines. These dynamics play out continuously. Repsol’s ability to learn with AI is fundamental to obtaining significant benefits with AI.”

The survey’s finding of 57% of companies having AI pilots or deployed solutions in 2020 is up from 44% in 2018.   Also, 59% said they have an AI strategy, up from 39% in 2017.

Originally published by
Matt Hamblen | October 20, 2020
Fierce Electronics

The full report is online. 

 

 

Read more…
Standard

Image: Speech bubble - Credit: Photo by Volodymyr Hryshchenko on Unsplash

Researchers have used artificial intelligence to reduce the 'communication gap' for nonverbal people with motor disabilities who rely on computers to converse with others.

The team, from the University of Cambridge and the University of Dundee, developed a new context-aware method that reduces this communication gap by eliminating between 50% and 96% of the keystrokes the person has to type to communicate.

The system is specifically tailored for nonverbal people and uses a range of context ‘clues’ – such as the user’s location, the time of day or the identity of the user’s speaking partner – to suggest the sentences that are most relevant for the user.

Nonverbal people with motor disabilities often use a computer with speech output to communicate with others. However, even without a physical disability that affects the typing process, these communication aids are too slow and error-prone for meaningful conversation: typical typing rates are between five and 20 words per minute, while a typical speaking rate is in the range of 100 to 140 words per minute.

“This difference in communication rates is referred to as the communication gap,” said Professor Per Ola Kristensson from Cambridge’s Department of Engineering, the study’s lead author. “The gap is typically between 80 and 135 words per minute and affects the quality of everyday interactions for people who rely on computers to communicate.”

The method developed by Professor Kristensson and his colleagues uses artificial intelligence to allow a user to quickly retrieve sentences they have typed in the past. Prior research has shown that people who rely on speech synthesis, just like everyone else, tend to reuse many of the same phrases and sentences in everyday conversation. However, retrieving these phrases and sentences is a time-consuming process for users of existing speech synthesis technologies, further slowing down the flow of conversation.

In the new system, as the person types, information retrieval algorithms automatically retrieve the most relevant previous sentences based on the text typed and the context of the conversation the person is involved in. Context includes information about the conversation such as the location, the time of day, and automatic identification of the speaking partner’s face. The other speaker is identified using a computer vision algorithm trained to recognise human faces from a front-mounted camera.

The system was developed using design engineering methods typically used for jet engines or medical devices. The researchers first identified the critical functions of the system, such as the word auto-complete function and the sentence retrieval function. After these functions had been identified, the researchers simulated a nonverbal person typing a large set of sentences drawn from a corpus representative of the text a nonverbal person would want to communicate.

This analysis allowed the researchers to determine the best method for retrieving sentences and the impact of a range of parameters on performance, such as the accuracy of word auto-complete and the effect of using many context tags. For example, the analysis revealed that only two reasonably accurate context tags are required to provide the majority of the gain. Word auto-complete makes a positive contribution but is not essential for realising most of the gain. The sentences are retrieved using information retrieval algorithms, similar to web search: context tags are added to the words the user types to form a query.
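To make the retrieval step concrete, here is a minimal sketch, assuming a TF-IDF index and invented sentences and context tags. It illustrates the general idea only; it is not the researchers' implementation, which uses its own sentence sets and retrieval algorithms.

```python
# Minimal sketch of context-aware sentence retrieval (illustrative only;
# not the published system). Context tags such as location and speaking
# partner are appended to each stored sentence and to the query, so that
# sentences previously used in a similar context rank higher.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical sentence bank: (sentence, context tags it was spoken in)
sentence_bank = [
    ("could I have a coffee please", "cafe morning barista"),
    ("my wheelchair needs charging", "home evening carer"),
    ("please turn the television on", "home evening carer"),
]

# The index stores each sentence's text together with its context tags.
corpus = [f"{text} {tags}" for text, tags in sentence_bank]
vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(corpus)

def retrieve(typed_prefix, context_tags, top_k=2):
    """Rank stored sentences by similarity to the typed prefix plus context."""
    query = vectorizer.transform([f"{typed_prefix} {context_tags}"])
    scores = cosine_similarity(query, index).ravel()
    best = scores.argsort()[::-1][:top_k]
    return [(sentence_bank[i][0], round(float(scores[i]), 3)) for i in best]

# The user has typed only "please" while at home in the evening with a carer.
print(retrieve("please", "home evening carer"))
```

Even with a single typed word, the context tags are enough to push the home-and-carer sentences ahead of the café sentence, which is the kind of keystroke saving the study quantifies.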

The study is the first to integrate context-aware information retrieval with speech-generating devices for people with motor disabilities, demonstrating how context-sensitive artificial intelligence can improve the lives of people with motor disabilities.

“This method gives us hope for more innovative AI-infused systems to help people with motor disabilities to communicate in the future,” said Professor Kristensson. “We’ve shown it’s possible to reduce the opportunity cost of not doing innovative research with AI-infused user interfaces that challenge traditional user interface design mantra and processes.”

Originally published by
Department of Engineering | October 19, 2020
Cambridge University

The research paper was published at CHI 2020.

The research was funded by the Engineering and Physical Sciences Research Council.

Reference:
Kristensson, P.O., Lilley, J., Black, R. and Waller, A. ‘A design engineering approach for quantitatively exploring context-aware sentence retrieval for nonspeaking individuals with motor disabilities.’ In Proceedings of the 38th ACM Conference on Human Factors in Computing Systems (CHI 2020). DOI: 10.1145/3313831.3376525

 

Read more…
Gold Level Contributor

On the left, an image of the Agarwal group’s device, a single layer of tungsten disulfide (WS2) on a periodically patterned photonic crystal. Strong coupling between the excitons of WS2 with the photonic crystal leads to the formation of exciton-photon polaritons with helical topological properties. On the right, the bright spot is circularly polarized light exciting helical topological exciton-polaritons, which have a particular spin and propagate forward, bending around sharp corners with no backscattering. (Image: Penn Engineering Today)

New research from Penn Engineering describes a new type of 'quasiparticle' and topological insulator, opening up new opportunities and potential applications in future photonic devices.

Quantum devices have the potential to revolutionize computing but there are still a number of practical limitations and hurdles. One challenge is that qubits, the basic unit of quantum information, are fragile, so quantum devices can only be used at extremely low temperatures.

To help address this challenge, researchers are evaluating new types of “quasiparticles,” phenomena that appear to combine the properties of different particles. Finding ways to both achieve and control the right combination of properties, such as mass, speed, or direction of motion, would allow these types of particles to be used more broadly. Examples of these quasiparticles include excitons, which act like an electron bound to an empty space in a semiconducting material. Another example, one step up in complexity, is the exciton-polariton, which combines the properties of an exciton with those of a photon and subsequently behaves like a combination of both matter and light.

Researchers at Penn’s School of Engineering and Applied Science have created a new and exotic form of exciton-polariton, one with a defined quantum spin that is locked to its direction of motion. In a study recently published in Science, the researchers found that, depending on the direction of the quasiparticle’s spin, these helical topological exciton-polaritons move in opposite directions along the surface of a newly developed topological insulator, a material with a conductive surface and an insulating core, which was also developed as part of this study. The research was led by Ritesh Agarwal, professor in the Department of Materials Science and Engineering, and Wenjing Liu, a postdoctoral researcher in his lab, in collaboration with researchers at Hunan University and George Washington University.

The researchers also found that this approach works at warmer, more user-friendly conditions, with this study conducted at 200 Kelvin, or roughly -100F, as compared to similar systems that operate at 4K, or roughly -450F. This opens up the possibility of using these new quasiparticles and topological insulators to transmit information or perform computations at unprecedented speeds. The researchers are confident that further research and improved fabrication techniques will allow their design to operate at room temperature.

Read more at Penn Engineering Today.

Originally published by
Penn Today | October 16, 2020
School of Engineering & Applied Science | Penn - University of Pennsylvania

 

Read more…
Standard

AI self-driving cars are likely to be relegated to the scrap heap sooner than conventional cars, due to their potential for 24x7 use and the rapid technological pace of the genre. (Credit: Getty Images)

Originally published by
Dr. Lance Eliot | October 15, 2020
AI Trends

After AI autonomous self-driving cars have been abundantly fielded onto our roadways, one intriguing question that has so far received scant attention is how long those self-driving cars will last.

It is easy to simply assume that the endurance of a self-driving car is presumably going to be the same as today’s conventional cars, especially since most of the self-driving cars are currently making use of a conventional car rather than a special-purpose built vehicle. 

But there is something about self-driving cars that perhaps does not immediately meet the eye, namely, they are likely to accumulate a lot of miles in a short period. Given that the AI is doing the driving, there is no longer a damper on the number of miles a car might be driven in any given period, a limit usually set by the availability of a human driver. Instead, the AI is a 24 x 7 driver that can be used non-stop, leveraging the self-driving car into a continuously moving and available ride-sharing vehicle.

With all that mileage, the number of years of endurance is going to be lessened in comparison to a comparable conventional car that is driven only intermittently. You could say that the car is still the car, while the difference is that the car might get as many miles of use in a much shorter period of time and thus reach its end-of-life sooner (though nonetheless still racking up the same total number of miles). 

Some automotive makers have speculated that self-driving cars might only last about four years. 

This comes as quite a shocking revelation that AI-based autonomous cars might merely be usable for a scant four years at a time and then presumably end-up on the scrap heap. 

Let’s unpack the matter and explore the ramifications of a presumed four-year life span for self-driving cars. 

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/ 

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

Life Span Of Cars 

According to various stats about today’s cars, the average age of a conventional car in the United States is estimated at 11.6 years old. 

Some tend to use the 11.6 years, or a rounded 12 years, as a surrogate for how long a car lasts in the U.S., though this is somewhat problematic since the average age is not the endpoint of a car and encapsulates a range of car ages, including a slew of cars retired at a much younger age and those that hang on to a much older age.

Indeed, one of the fastest-growing segments of car ages is the group that is 16 years or older, amounting to an estimated 81 million such cars by the year 2021. Of those 81 million cars, around one-fourth are going to be more than 25 years old. 

In short, cars are being kept around longer and longer. 

When you buy a new car, the rule-of-thumb often quoted by automakers is that the car should last about 8 years or 150,000 miles. 

This is obviously a low-ball kind of posturing, trying to set expectations so that car buyers will be pleased if their cars last longer. One supposes it also perhaps gets buyers into the mental mode of considering buying their next car in about eight years or so. 

Continuing the effort to consider various stats about cars, Americans drive their cars about 11,000 miles per year. If a new car is supposed to last for 150,000 miles, the math then suggests that at 11,000 miles per year you could drive the car for roughly 14 years (150,000 miles divided by 11,000 miles per year is about 13.6 years).

Of course, the average everyday driver is using their car for easy driving such as commuting to work and driving to the grocery store. Generally, you wouldn’t expect the average driver to be putting many miles onto a car. 

What about those that are pushing their cars to the limit and driving their cars in a much harsher manner? 

Various published stats about ridesharing drivers such as Uber and Lyft suggest that they are amassing about 1,000 miles per week on their cars. If so, you could suggest that the number of miles per year would be approximately 50,000 miles. At the pace of 50,000 miles per year, presumably, these on-the-go cars would only last about 3 years, based on the math of 150,000 miles divided by 50,000 miles per year. 

In theory, this implies that a ridesharing car being used today will perhaps last about 3 years. 
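The arithmetic behind these estimates is straightforward. The sketch below simply divides an assumed 150,000-mile service life by different annual mileages; the figures are the article's illustrative numbers, not measured data.

```python
# Rough service-life estimate: assumed total mileage divided by annual mileage.
# All numbers are the article's illustrative figures, not measured data.
SERVICE_LIFE_MILES = 150_000

usage_profiles = {
    "average driver (11,000 miles/year)": 11_000,
    "ridesharing driver (~1,000 miles/week)": 1_000 * 52,
}

for profile, miles_per_year in usage_profiles.items():
    years = SERVICE_LIFE_MILES / miles_per_year
    print(f"{profile}: about {years:.1f} years")

# Output: roughly 13.6 years for the average driver versus about 2.9 years
# for a car driven at ridesharing intensity.
```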

For self-driving cars, most would agree that a driverless car is going to be used in a similar ridesharing manner and be on-the-road quite a lot. 

This seems sensible. To make as much money as possible with a driverless car, you would likely seek to maximize the use of it. Put it onto a ridesharing network and let it be used as much as people are willing to book it and pay to use it. 

Without the cost and hassle of having to find and use a human driver for a driverless car, the AI will presumably be willing to drive a car whenever and however long is needed. As such, a true self-driving car is being touted as likely to be running 24×7. 

In reality, you can’t actually have a self-driving car that is always roaming around, since there needs to be time set aside for ongoing maintenance of the car, along with repairs, and some amount of time for fueling or recharging of the driverless car. 

Overall, it would seem logical to postulate that a self-driving car will be used at least as much as today’s human-driven ridesharing cars, plus a lot more so since the self-driving car is not limited by human driving constraints. 

In short, if it is the case that today’s ridesharing cars are hitting their boundaries at perhaps three to five years, you could reasonably extend that same thinking to driverless cars and assume therefore that self-driving cars might only last about four years. 

The shock that a driverless car might only last four years is not quite as surprising when you consider that a true self-driving car is going to be pushed to its limits in terms of usage and be a ridesharing goldmine (presumably) that will undergo nearly continual driving time. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/ 

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Factors Of Car Aging 

Three key factors determine how long a car will last, namely: 

  • How the car was built 
  • How the car is used 
  • How the car is maintained 

Let’s consider how those key factors apply to self-driving cars. 

In the case of today’s early versions of what are intended to be driverless cars, by-and-large most of the automakers are using a conventional car as the basis for their driverless car, rather than building an entirely new kind of car. 

We will eventually see entirely new kinds of cars being made to fully leverage a driverless car capability, but for right now it is easier and more expedient to use a conventional car as the cornerstone for an autonomous car. 

Therefore, for the foreseeable future, we can assume that the manner of how a driverless car was built is in keeping with how a conventional car is built, implying that the car itself will last as long as a conventional car might last. 

In terms of car usage, as already mentioned, a driverless car is going to get a lot more usage than the amount of driving by an average everyday driver and be used at least as much as today’s ridesharing efforts. The usage is bound to be much higher. 

The ongoing maintenance of a self-driving car will become vital to the owner of a driverless car. 

I say this because any shortcomings in the maintenance would tend to mean that the driverless car will be in the shop and not be as available on the streets. The revenue stream from an always-on self-driving car will be a compelling reason for owners to make sure that their self-driving car is getting the proper amount of maintenance. 

In that sense, the odds are that a driverless car will likely be better maintained than either an average everyday car or even today’s ridesharing cars.

One additional element to consider for driverless cars consists of the add-ons for the sensory capabilities and the computer processing aspects. Those sensory devices such as cameras, radar, ultrasonic, LIDAR, and so on, need to be factored into the longevity of the overall car, and the same applies to the computer chips and memory on-board too. 

Why Retire A Car 

The decision to retire a car is based on a trade-off between trying to continue to pour money into a car that is breaking down and excessively costing money to keep afloat, versus ditching the car and opting to get a new or newer car instead. 

Thus, when you look at how long a car will last, you are also silently considering the cost of a new or newer car. 

We don’t yet know what the cost of a driverless car is going to be. 

If the cost is really high to purchase a self-driving car, you would presumably have a greater incentive to try and keep a used self-driving car in sufficient working order. 

There is also a safety element that comes to play in deciding whether to retire a self-driving car. 

Suppose a driverless car that is routinely maintained starts out as safe as a new self-driving car; eventually, though, maintenance can only achieve so much in ensuring that the vehicle remains as safe on the roadways as a new or newer self-driving car would be.

The owner of the used self-driving car would need to ascertain whether the safety degradation means that the used driverless car needs to be retired. 

Used Market For Self-Driving Cars 

With conventional cars, an owner who first purchased a new car will likely sell it after a while. We all realize that a conventional car might end up being passed from one buyer to another over its lifespan.

Will there be an equivalent market for used self-driving cars? 

You might be inclined to immediately suggest that once a self-driving car has reached some point of no longer being safe enough, it needs to be retired. We don’t yet know, and no one has established what that safety juncture or threshold might be. 

There could be a used self-driving car market that involved selling a used driverless car that was still within some bounds of being safe. 

Suppose a driverless car owner that had used their self-driving car extensively in a downtown city setting opted to sell the autonomous car to someone that lived in a suburban community. The logic might be that the self-driving car no longer was sufficient for use in a stop-and-go traffic environment but might be viable in a less stressful suburban locale. 

Overall, no one is especially thinking about used self-driving cars, which is admittedly a concern that is far away in the future and therefore not a topic looming over us today. 

Retirement Of A Self-Driving Car 

Other than becoming a used car, what else might happen to a self-driving car after it’s been in use for a while? 

Some have wondered whether it might be feasible to convert a self-driving car into becoming a human-driven car, doing so to place the car into the used market for human-driven cars. 

Well, it depends on how the self-driving car was originally made. If the self-driving car has all of the mechanical and electronic guts for human driving controls, you could presumably unplug the autonomy and revert the car into being a human-driven car. 

I would assert that this is very unlikely, and you won’t see self-driving cars being transitioned into becoming human-driven cars. 

All told, it would seem that once a self-driving car has reached its end of life, the vehicle would become scrapped. 

If self-driving cars are being placed into the junk heap every four years, this raises the specter that we are going to have a lot of car junk piling up. For environmentalists, this is certainly disconcerting. 

Generally, today’s cars are relatively highly recyclable and reusable. Estimates suggest that around 80% of a car can be recycled or reused. 

For driverless cars, assuming they are built like today’s conventional cars, you would be able to potentially attain a similar recycled and reused parts percentage. The add-ons of the sensory devices and computer processors might be recyclable and reusable too, though this is not necessarily the case depending upon how the components were made. 


Conclusion 

Some critics would be tempted to claim that the automakers would adore having self-driving cars that last only four years. 

Presumably, it would mean that the automakers will be churning out new cars hand-over-fist, doing so to try and keep up with the demand for an ongoing supply of new driverless cars. 

On the other hand, some pundits have predicted that we won’t need as many cars as we have today, since a smaller number of ridesharing driverless cars will fulfill our driving needs, reducing the need for everyone to own a car.

No one knows. 

Another facet to consider involves the pace at which high-tech might advance and thus cause a heightened turnover in self-driving cars. Suppose the sensors and computer processors put into a driverless car are eclipsed in just a few years by faster, cheaper, and better sensors and computer processors. 

If the sensors and processors of a self-driving car are built-in, meaning that you can’t just readily swap them out, it could be that another driving force for the quicker life cycle of a driverless car might be as a result of the desire to make use of the latest in high-tech. 

The idea of retiring a driverless car in four years doesn’t seem quite as shocking after analyzing the basis for such a belief. 

Whether society is better off or not as a result of self-driving cars, and also the matter of those self-driving cars only lasting four years, is a complex question. We’ll need to see how this all plays out. 

Dr. Lance Eliot, CEO, Techbrium Inc. - techbrium.com - is a regular contributor as our AI Trends Insider, serves as the Executive Director of the Cybernetic AI Self-Driving Car Institute, and has published 11 books on the future of driverless cars. Follow Lance on Twitter @LanceEliot
Dr. Eliot can be reached at ai.selfdriving.cars@gmail.com

Read more…
Gold Level Contributor

What is artificial intelligence (AI)?

(monsiti/iStock/Getty Images Plus) AI exists all around us today, from supercomputers like Watson to predictive algorithms behind Google Maps and facial recognition programs.

Artificial intelligence (AI) is a difficult term to define because experts continue to argue about its definition. We’ll get into those arguments later, but for now, think of AI as the technology through which computers execute tasks that would normally require human intellect. Humans and animals have a natural intellect, but computers and other intelligent agents have artificial intelligence that engineers and scientists design.

AI differs from machine learning and deep learning, though the topics are related. Machine learning is a subcategory within AI in which a machine learns and performs functions it wasn’t specifically programmed to do (using what some argue to be logic). Deep learning is a subcategory of machine learning in which machines analyze data using multi-layer algorithms, or neural networks. In a way, it mimics human thought.

The great debate

Alan Turing was a pioneer in the computing space. Many believe his theories about the creation of a ‘thinking computer’ in 1950 to have become the foundation of the field of AI as we know it today. Turing wondered if computers could think, but also argued for the necessity of empirical evidence to prove the achievement of that milestone.

He created the “Turing Test,” based on a Victorian game called the imitation game, to tease out that information. If a machine could perform as well as a human at the game, its performance would be considered equivalent to thought. Sounds simple, but to date, scientists have not been able to build a machine that comes close to passing the Turing Test.

Others, such as fellow AI founding father John McCarthy (who is also credited with coining the phrase “artificial intelligence”), argued that a simpler definition of AI exists. McCarthy stated a machine could be thought of as having AI if it performed tasks “which, when done by people, are said to involve intelligence.” And this broad definition, as you can imagine, still left room for plenty of debate.

Broadening the lens: General vs. narrow AI

AI technology has advanced significantly. While computers today cannot be said to think, according to Turing, our machines do all sorts of incredible things. IBM’s supercomputer Watson won a game of Jeopardy! We are nearing the market release of autonomous vehicles. And many consider the algorithms that recommend radio stations and products you might like based on your browsing history to be a true sign of artificial intelligence.

In order to acknowledge the existence of such advancements, the field of AI currently defines two types of technologies: General, or Strong, AI, and Narrow, or Weak, AI. General AI can be thought of as a machine capable of the kind of complex, multi-faceted thinking humans exhibit – in line with Turing’s definition. Narrow AI, on the other hand, encompasses the kind of AI we see today: smart machines that can carry out intelligent tasks that would otherwise require human intervention or ingenuity. And that kind of AI exists all around us. Not just in supercomputers like Watson, but also in predictive algorithms behind Google Maps, facial recognition programs, in systems that predict wear and tear in industry, and so much more.

Where AI is headed

Experts continue to debate how long it will be until General AI-powered devices co-exist with us in the world. Further, important conversations are taking place about the ethics of how developers build this future.

AI can be used for good, but it can also be weaponized. Ethics and quality control must become inherent to the development of AI programs. With this, if computers are not built with the capacity for things like empathy, some argue a future akin to the film Terminator will be waiting for us. While that future isn’t necessarily probable, it is possible. AI programmers have incredible power. And, to quote a great film, with that power comes great responsibility. 

Originally posted by
Cabe Atwell | October 14, 2020
Fierce Electronics

Sources:

https://www.brookings.edu/research/what-is-artificial-intelligence/

https://en.wikipedia.org/wiki/Artificial_intelligence#cite_note-Definition_of_AI-3

https://www.bbc.com/news/technology-18475646

https://blogs.oracle.com/bigdata/difference-ai-machine-learning-deep-learning

https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html

https://www.forbes.com/sites/cognitiveworld/2019/10/04/rethinking-weak-vs-strong-ai/#6152dd086da3

https://cisomag.eccouncil.org/hackers-using-ai/#:~:text=Artificial%20Intelligence%20as%20Security%20Solution%20and%20Weaponization%20by%20Hackers,-By&text=Artificial%20intelligence%20is%20a%20double,traits%20associated%20with%20human%20behaviors.

Read more…
Standard
Using a machine-learning approach that incorporates uncertainty, MIT researchers identified several promising compounds that target a protein required for the survival of the bacteria that cause tuberculosis. (Credit: MIT News)
 
Computational method for screening drug compounds can help predict which ones will work best against tuberculosis or other diseases.
 

Machine learning is a computational tool used by many biologists to analyze huge amounts of data, helping them to identify potential new drugs. MIT researchers have now incorporated a new feature into these types of machine-learning algorithms, improving their prediction-making ability.

Using this new approach, which allows computer models to account for uncertainty in the data they’re analyzing, the MIT team identified several promising compounds that target a protein required by the bacteria that cause tuberculosis.

This method, which has previously been used by computer scientists but has not taken off in biology, could also prove useful in protein design and many other fields of biology, says Bonnie Berger, the Simons Professor of Mathematics and head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

“This technique is part of a known subfield of machine learning, but people have not brought it to biology,” Berger says. “This is a paradigm shift, and is absolutely how biological exploration should be done.”

Berger and Bryan Bryson, an assistant professor of biological engineering at MIT and a member of the Ragon Institute of MGH, MIT, and Harvard, are the senior authors of the study, which appears today in Cell Systems. MIT graduate student Brian Hie is the paper’s lead author.

Better predictions

Machine learning is a type of computer modeling in which an algorithm learns to make predictions based on data that it has already seen. In recent years, biologists have begun using machine learning to scour huge databases of potential drug compounds to find molecules that interact with particular targets.

One limitation of this method is that while the algorithms perform well when the data they’re analyzing are similar to the data they were trained on, they’re not very good at evaluating molecules that are very different from the ones they have already seen.

To overcome that, the researchers used a technique called Gaussian process to assign uncertainty values to the data that the algorithms are trained on. That way, when the models are analyzing the training data, they also take into account how reliable those predictions are.

For example, if the data going into the model predict how strongly a particular molecule binds to a target protein, as well as the uncertainty of those predictions, the model can use that information to make predictions for protein-target interactions that it hasn’t seen before. The model also estimates the certainty of its own predictions. When analyzing new data, the model’s predictions may have lower certainty for molecules that are very different from the training data. Researchers can use that information to help them decide which molecules to test experimentally.
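As a rough sketch of the idea (not the authors' model, and with invented feature vectors and affinity values), a Gaussian process regressor can return both a predicted binding score and a standard deviation for each candidate, and the two can be combined to decide which molecules are worth testing.

```python
# Toy sketch of uncertainty-aware screening with a Gaussian process.
# Features and affinities are random placeholders, not the study's data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Pretend training set: molecular feature vectors and measured affinities.
X_train = rng.normal(size=(72, 16))
y_train = rng.normal(size=72)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Screen a larger "library": each prediction comes with its own uncertainty.
X_library = rng.normal(size=(1000, 16))
mean, std = gp.predict(X_library, return_std=True)

# Prioritize molecules predicted to bind strongly AND predicted confidently.
ranked = np.argsort(mean - 2.0 * std)[::-1]
print("top candidates:", ranked[:10])
```

Molecules far from anything in the training set come back with large standard deviations and fall down the ranking, which is the behavior the researchers exploit when choosing what to test experimentally.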

Another advantage of this approach is that the algorithm requires only a small amount of training data. In this study, the MIT team trained the model with a dataset of 72 small molecules and their interactions with more than 400 proteins called protein kinases. They were then able to use this algorithm to analyze nearly 11,000 small molecules, which they took from the ZINC database, a publicly available repository that contains millions of chemical compounds. Many of these molecules were very different from those in the training data.

Using this approach, the researchers were able to identify molecules with very strong predicted binding affinities for the protein kinases they put into the model. These included three human kinases, as well as one kinase found in Mycobacterium tuberculosis. That kinase, PknB, is critical for the bacteria to survive, but is not targeted by any frontline TB antibiotics.

The researchers then experimentally tested some of their top hits to see how well they actually bind to their targets, and found that the model’s predictions were very accurate. Among the molecules that the model assigned the highest certainty, about 90 percent proved to be true hits — much higher than the 30 to 40 percent hit rate of existing machine learning models used for drug screens.

The researchers also used the same training data to train a traditional machine-learning algorithm, which does not incorporate uncertainty, and then had it analyze the same 11,000 molecule library. “Without uncertainty, the model just gets horribly confused and it proposes very weird chemical structures as interacting with the kinases,” Hie says.

The researchers then took some of their most promising PknB inhibitors and tested them against Mycobacterium tuberculosis grown in bacterial culture media, and found that they inhibited bacterial growth. The inhibitors also worked in human immune cells infected with the bacterium.

A good starting point

Another important element of this approach is that once the researchers get additional experimental data, they can add it to the model and retrain it, further improving the predictions. Even a small amount of data can help the model get better, the researchers say.

“You don’t really need very large data sets on each iteration,” Hie says. “You can just retrain the model with maybe 10 new examples, which is something that a biologist can easily generate.”

This study is the first in many years to propose new molecules that can target PknB, and should give drug developers a good starting point to try to develop drugs that target the kinase, Bryson says. “We’ve now provided them with some new leads beyond what has been already published,” he says.

The researchers also showed that they could use this same type of machine learning to boost the fluorescent output of a green fluorescent protein, which is commonly used to label molecules inside living cells. It could also be applied to many other types of biological studies, says Berger, who is now using it to analyze mutations that drive tumor development.

The research was funded by the U.S. Department of Defense through the National Defense Science and Engineering Graduate Fellowship; the National Institutes of Health; the Ragon Institute of MGH, MIT, and Harvard; and MIT’s Department of Biological Engineering.

Originally published by
Anne Trafton | October 15, 2020
MIT News Office - MIT

Read more…
Standard

To spot a deep fake, researchers looked for inconsistencies between “visemes,” or mouth formations, and “phonemes,” the phonetic sounds.

One year ago, Maneesh Agrawala of Stanford helped develop a lip-sync technology that allowed video editors to almost undetectably modify speakers’ words. The tool could seamlessly insert words that a person never said, even mid-sentence, or eliminate words she had said. To the naked eye, and even to many computer-based systems, nothing would look amiss.

The tool made it much easier to fix glitches without re-shooting entire scenes, as well as to tailor TV shows or movies for different audiences in different places.

But the technology also created worrisome new opportunities for hard-to-spot deep-fake videos that are created for the express purpose of distorting the truth.  A recent Republican video, for example, used a cruder technique to doctor an interview with Vice President Joe Biden.

This summer, Agrawala and colleagues at Stanford and UC Berkeley unveiled an AI-based approach to detect the lip-sync technology. The new program accurately spots more than 80 percent of fakes by recognizing minute mismatches between the sounds people make and the shapes of their mouths.

But Agrawala, the director of Stanford’s Brown Institute for Media Innovation and the Forest Baskett Professor of Computer Science, who is also affiliated with the Stanford Institute of Human-Centered Artificial Intelligence, warns that there is no long-term technical solution to deep fakes.

The real task, he says, is to increase media literacy to hold people more accountable if they deliberately produce and spread misinformation.

“As the technology to manipulate video gets better and better, the capability of technology to detect manipulation will get worse and worse,” he says. “We need to focus on non-technical ways to identify and reduce disinformation and misinformation.”

The manipulated video of Biden, for example, was exposed not by the technology but rather because the person who had interviewed the vice president recognized that his own question had been changed.

How Deep Fakes Work

There are legitimate reasons for manipulating video. Anyone producing a fictional TV show, a movie or a commercial, for example, can save time and money by using digital tools to clean up mistakes or tweak scripts.

The problem comes when those tools are intentionally used to spread false information. And many of the techniques are invisible to ordinary viewers.

Many deep-fake videos rely on face-swapping, literally superimposing one person’s face over a video of someone else. But while face-swapping tools can be convincing, they are relatively crude and usually leave digital or visual artifacts that a computer can detect.

Lip-sync technologies, on the other hand, are more subtle and thus harder to spot. They manipulate a much smaller part of the image, and then synthesize lip movements that closely match the way a person’s mouth really would have moved if he or she had said particular words. With enough samples of a person’s image and voice, says Agrawala, a deep-fake producer can get a person to “say” anything.

Spotting the Fakes

Worried about unethical uses of such technology, Agrawala teamed up on a detection tool with Ohad Fried, a postdoctoral fellow at Stanford; Hany Farid, a professor at UC Berkeley’s School of Information; and Shruti Agarwal, a doctoral student at Berkeley.

The basic idea is to look for inconsistencies between “visemes,” or mouth formations, and “phonemes,” the phonetic sounds. Specifically, the researchers looked at the person’s mouth when making the sounds of a “B,” “M,” or “P,” because it’s almost impossible to make those sounds without firmly closing the lips.
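A highly simplified sketch of that consistency check is shown below. It assumes two hypothetical inputs, a phoneme-level transcript with timestamps (for example from a forced aligner) and a per-frame lip-opening measurement from a face-landmark tracker; the real detector relies on learned features and a neural network rather than a hand-set threshold.

```python
# Toy viseme/phoneme consistency check (illustrative, not the published
# detector). For bilabial phonemes (B, M, P) the lips should be essentially
# closed; frames where they are not are counted as suspicious.
BILABIAL = {"B", "M", "P"}
LIP_CLOSED_THRESHOLD = 0.05  # hypothetical normalized lip-opening value

def mismatch_score(phoneme_intervals, lip_opening_per_frame, fps=30):
    """phoneme_intervals: list of (phoneme, start_sec, end_sec) tuples.
    lip_opening_per_frame: list of normalized lip openings (0 = closed).
    Both inputs are assumed to come from upstream tools."""
    suspicious = checked = 0
    for phoneme, start, end in phoneme_intervals:
        if phoneme not in BILABIAL:
            continue
        for frame in range(int(start * fps), int(end * fps) + 1):
            if frame >= len(lip_opening_per_frame):
                break
            checked += 1
            if lip_opening_per_frame[frame] > LIP_CLOSED_THRESHOLD:
                suspicious += 1
    return suspicious / checked if checked else 0.0

# A high score means the mouth was open during sounds that require closed
# lips, one cue that the audio and the video do not match.
```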

The researchers first experimented with a purely manual technique, in which human observers studied frames of video. That worked well but was both labor-intensive and time-consuming in practice.

The researchers then tested an AI-based neural network, which would be much faster, to make the same analysis after training it on videos of former President Barack Obama. The neural network spotted well over 90 percent of lip-syncs involving Obama himself, though the accuracy dropped to about 81 percent in spotting them for other speakers.

A Real Truth Test

The researchers say their approach is merely part of a “cat-and-mouse” game. As deep-fake techniques improve, they will leave even fewer clues behind.

In the long run, Agrawala says, the real challenge is less about fighting deep-fake videos than about fighting disinformation. Indeed, he notes, most disinformation comes from distorting the meaning of things people actually have said.

“Detecting whether a video has been manipulated is different from detecting whether the video contains misinformation or disinformation, and the latter is much, much harder,” says Agrawala.

“To reduce disinformation, we need to increase media literacy and develop systems of accountability,” he says. “That could mean laws against deliberately producing disinformation and consequences for breaking them, as well as mechanisms to repair the harms caused as a result.”

Originally published by
Edmund L Andrews | October 13, 2020
HAI, Stanford University

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition. 

Read more…
Gold Level Contributor

In a proof-of-concept study, education and artificial intelligence researchers have demonstrated the use of a machine-learning model to predict how long individual museum visitors will engage with a given exhibit. The finding opens the door to a host of new work on improving user engagement with informal learning tools.

“Education is an important part of the mission statement for most museums,” says Jonathan Rowe, co-author of the study and a research scientist in North Carolina State University’s Center for Educational Informatics (CEI). “The amount of time people spend engaging with an exhibit is used as a proxy for engagement and helps us assess the quality of learning experiences in a museum setting. It’s not like school – you can’t make visitors take a test.”

“If we can determine how long people will spend at an exhibit, or when an exhibit begins to lose their attention, we can use that information to develop and implement adaptive exhibits that respond to user behavior in order to keep visitors engaged,” says Andrew Emerson, first author of the study and a Ph.D. student at NC State.

“We could also feed relevant data to museum staff on what is working and what people aren’t responding to,” Rowe says. “That can help them allocate personnel or other resources to shape the museum experience based on which visitors are on the floor at any given time.”

To determine how machine-learning programs might be able to predict user interaction times, the researchers closely monitored 85 museum visitors as they engaged with an interactive exhibit on environmental science. Specifically, the researchers collected data on study participants’ facial expressions, posture, where they looked on the exhibit’s screen and which parts of the screen they touched.

The data were fed into five different machine-learning models to determine which combinations of data and models resulted in the most accurate predictions.

“We found that a particular machine-learning method called ‘random forests’ worked quite well, even using only posture and facial expression data,” Emerson says.

The researchers also found that the models worked better the longer people interacted with the exhibit, since that gave them more data to work with. For example, a prediction made after a few minutes would be more accurate than a prediction made after 30 seconds. For context, user interactions with the exhibit lasted as long as 12 minutes.
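A minimal sketch of that modelling setup is shown below, with entirely made-up feature values standing in for the study's multimodal data (posture angles, facial-expression intensities, and similar statistics aggregated over the opening portion of a visit).

```python
# Toy sketch of predicting exhibit engagement time with a random forest.
# Features and labels are random placeholders, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Pretend features for 85 visitors: e.g., posture and facial-expression
# statistics computed over the first portion of each interaction.
X = rng.normal(size=(85, 12))
y = rng.uniform(30, 720, size=85)  # total interaction time in seconds

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("predicted interaction time (s):", model.predict(X_test[:3]))
```

Computing the features over a longer observation window corresponds to the finding above: more behavioral evidence per visitor generally yields more accurate predictions.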

“We’re excited about this, because it paves the way for new approaches to study how visitors learn in museums,” says Rowe. “Ultimately, we want to use technology to make learning more effective and more engaging.”

The paper, “Early Prediction of Visitor Engagement in Science Museums with Multimodal Learning Analytics,” will be presented at the 22nd ACM International Conference on Multimodal Interaction (ICMI ’20), being held online Oct. 25-29. The paper was co-authored by Nathan Henderson, a Ph.D. student at NC State; Wookhee Min and Seung Lee, research scientists at NC State’s CEI; James Minogue, an associate professor of teacher education and learning sciences at NC State; and James Lester, Distinguished University Professor of Computer Science and the director of CEI at NC State.

The work was done with support from the National Science Foundation under grant 1713545.

Originally published October 13, 2020 by
Jonathan Rowe, Andrew Emerson and Matt Shipman | NC State University

 

Read more…
Standard

 

Image: Photo Hobby - Unsplash

Robotics technology is not new in military applications; it has been used widely by armed forces across the world for many years. In today’s globalized world, robots in the military can perform various combat roles, including rescue tasks, explosive disarmament, fire support, reconnaissance, logistics support, lethal combat duties, and more. These robots can also be seen as an alternative to human soldiers, handling a broader range of combat tasks, from picking off snipers to targeting enemy areas with greater efficiency. Military robots can provide backup during heavy artillery fire and lower the number of casualties. They can also map a potentially large hostile area, identifying a variety of threats with precision.

As military robots come in diverse shapes and sizes based on the requirement, MarketsandMarkets predicted that the military robot industry will reach US$30.83 billion by 2022, growing at a CAGR of 12.9 percent during the projected period of 2017-2022.

Here’s a look at some of the most advanced military robots in the world, which are changing the face of warfare.

 AVATAR III 

AVATAR III is a tactical robot from Robotex. It can be used by military SWAT teams to keep safe the human soldiers who would normally perform this type of operation. The robot enhances the capabilities of law enforcement and first responders by allowing them to safely and quickly inspect treacherous situations. The AVATAR III is completely customizable with plug-and-play payload bays, enabling users to configure the robot to fit their needs.

DOGO

DOGO is an innovative tactical combat robot, armed with a 9 mm Glock pistol and created to serve as a watchdog for soldiers in the field. Designed by General Robotics, this robot is the earthbound equivalent of the ubiquitous combat drone. The most interesting thing about DOGO is that it weighs roughly 26 pounds and can be carried in one hand by a fully armed commando. Reportedly, DOGO was designed with input from the Israeli police’s counterterror unit and the Defense Ministry’s research and development directorate to combat terrorism.

RiSE

RiSE is an insect-like climbing robot that uses micro-clawed feet to nimbly scale textured surfaces such as walls, fences, and trees. It was developed in collaboration by Boston Dynamics, Inc., Stanford University, Carnegie Mellon University, U.C. Berkeley, and Lewis & Clark University. The goal of the RiSE project is to create a bioinspired climbing robot with the unique ability to walk on land and climb vertical surfaces. The project is funded by the DARPA Biodynotics Program.

SAFFiR

SAFFiR (Shipboard Autonomous Firefighting Robot) is a 5-foot-10-inch military robot weighing 143 pounds. Developed by researchers at Virginia Tech, the robot is designed to extinguish fires that break out on naval ships. SAFFiR can’t stand without a tether, but it is capable of taking measured steps and handling a fire hose. Its unique mechanical design gives it a superhuman range of motion to maneuver in complex spaces. The ultimate goal is for SAFFiR to work in tandem with Navy officers, not replace them.

MUTT

Short for Multi-Utility Tactical Transport, MUTT is an unmanned ground vehicle that comes in wheeled and tracked versions. MUTT accompanies fighters, making travel easier by decreasing the amount of equipment they must carry while crossing difficult terrain on foot. This autonomous war vehicle is offered in tracked, 6×6, and 8×8 configurations. The 8×8 MUTT is 112 inches long by 60 inches wide and carries up to 1,200 pounds. It can provide up to 3,000 watts of power and travel for up to 60 miles on a single tank of gas.

Guardbot

Guardbot is an amphibious surveillance robot that can roll over any terrain, including snow, sand, and dirt. Along with maneuvering on any terrain, this surveillance robot can even swim. Originally designed for missions to Mars, Guardbot is equipped with two surveillance cameras, a battery that lasts for 25 hours, microphones and GPS, which allows it to be controlled via satellite as well as remotely. Its smaller version can help search underneath vehicles at security checkpoints.

Gladiator

The Gladiator Tactical Unmanned Ground Vehicle is designed to support the Marine Corps in conducting Ship To Objective Maneuver (STOM). It uses a small-to-medium-sized mobile robotic system to lessen risk and neutralize threats to Marines across the spectrum of conflict. It looks like a small tank, but in its basic configuration it can perform scout/surveillance, NBC reconnaissance, direct fire, and personnel obstacle breaching missions. Gladiator can provide day/night remote visual acuity similar to that of an individual Marine using current image-intensifying or thermal devices.

Originally published by
Vivek Kumar | October 9, 2020
Analytics Insight

Read more…
Standard

New tools for helping with mental health issues are employing AI to increase automation and connect patients with health care resources. (Credit: Getty Images) 

The pandemic is a perfect storm for mental health issues. Isolation from others, economic uncertainty, and fear of illness can all contribute to poor mental health — and right now, most people around the world face all three. 

New research suggests that the virus is tangibly affecting mental health. Rates of depression and anxiety symptoms are much higher than normal. In some population groups, like students and young people, these numbers are almost double what they’ve been in the past. 

Some researchers are even concerned that the prolonged, unavoidable stress of the virus may result in people developing long-term mental health conditions — including depression, anxiety disorders and even PTSD, according to an account in Business Insider. Those on the front lines, like medical professionals, grocery store clerks and sanitation workers, may be at an especially high risk. 

Use of Digital Mental Health Tools with AI on the Rise  

Automation is already widely used in health care, primarily in the form of technology like AI-based electronic health records and automated billing tools, according to a blog post from ZyDoc, a supplier of medical transcription applications. It’s likely that COVID-19 will only increase the use of automation in the industry. Around the world, medical providers are adopting new tech, like self-piloting robots that act as hospital nurses. These providers are also using UV light-based cleaners to sanitize entire rooms more quickly. 

Digital mental health tools are also on the rise, along with fully automated AI tools that help patients get the care they need.  

The AI-powered behavioral health platform Quartet, for example, is one of several automated tools that aim to help diagnose patients, screening them for common conditions like depression, anxiety, and bipolar spectrum disorders, according to a recent account in AI Trends. Other software — like a new app developed by engineers at the University of New South Wales in Sydney, Australia — can screen patients for different mental health conditions, including dementia. With a diagnosis, patients are better equipped to find the care they need, such as from mental health professionals with in-depth knowledge of a particular condition.  

Another tool, an AI-based chatbot called Woebot, developed by Woebot Labs, Inc., uses brief daily chats to help people maintain their mental health. The bot is designed to teach skills related to cognitive behavioral therapy (CBT), a form of talk therapy that assists patients with identifying and managing maladaptive thought patterns.  

In April, Woebot Labs updated the bot to provide specialized COVID-19-related support in the form of a new therapeutic modality, called Interpersonal Psychotherapy (IPT), which helps users “process loss and role transition,” according to a press release from the company. 

Both Woebot and Quartet provide 24/7 access to mental health resources via the internet. This means that — so long as a person has an internet connection — they can’t be deterred by an inaccessible building or lengthy waitlist. 

New AI Tools Supporting Clinicians  

Some groups need more support than others. Clinicians working in hospitals are some of the most vulnerable to stress and anxiety. Right now, they’re facing long hours, high workloads, and frequent potential exposure to COVID. 

Developers and health care professionals are also working together to create new AI tools that will support clinicians as they tackle the challenges of providing care during the pandemic. 

One new AI-powered mental health platform, developed by the mobile mental health startup Rose, will gather real-time data on how clinicians are feeling via “questionnaires and free-response journal entries, which can be completed in as few as 30 seconds,” according to an account in Fierce Healthcare. The tool will scan through these responses, tracking the clinician’s mental health and stress levels. Over time, it should be able to identify situations and events likely to trigger dips in mental health or increased anxiety and tentatively diagnose conditions like depression, anxiety, and trauma. 
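Rose has not published its models, but the general idea of scanning free-response entries for stress signals can be illustrated with a toy keyword-based scorer like the one below. The word lists and threshold are invented for illustration; a real platform would rely on trained language models rather than keyword counts.

```python
# Crude illustration of scoring free-response journal entries for stress signals.
# Word lists and thresholds are invented; a real platform would use trained
# language models, not keyword counts.
STRESS_WORDS = {"exhausted", "overwhelmed", "anxious", "scared", "burnout"}
CALM_WORDS = {"rested", "supported", "hopeful", "calm", "grateful"}

def stress_score(entry: str) -> float:
    """Return a score in [-1, 1]; higher means more stress language."""
    words = [w.strip(".,!?") for w in entry.lower().split()]
    hits_stress = sum(w in STRESS_WORDS for w in words)
    hits_calm = sum(w in CALM_WORDS for w in words)
    total = hits_stress + hits_calm
    return 0.0 if total == 0 else (hits_stress - hits_calm) / total

journal = [
    "Long shift, feeling exhausted and overwhelmed.",
    "Team helped out today, feeling hopeful and rested.",
]
scores = [stress_score(e) for e in journal]
print(scores)                                  # [1.0, -1.0]
if sum(scores) / len(scores) > 0.5:
    print("flag: sustained high stress -- surface support resources")
```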

Front-line health care workers are up against an unprecedented challenge, facing a wave of new patients and potential exposure to COVID, according to Kavi Misri, founder and CEO of Rose. As a result, many of these workers may be more vulnerable to stress, anxiety and other mental health issues.  

“We simply can’t ignore this emerging crisis that threatens the mental health and stability of our essential workers – they need support,” stated Misri. 

Rose is also providing clinicians access to more than 1,000 articles and videos on mental health topics. Each user’s feed of content is curated based on the data gathered by the platform. 

Right now, Brigham and Women’s Hospital, the second-largest teaching hospital at Harvard, is experimenting with the technology in a pilot program. If effective, the tech could soon be used around the country to support clinicians on the front lines of the crisis. 

Mental health will likely stay a major challenge for as long as the pandemic persists. Fortunately, AI-powered experimental tools for mental health should help to manage the stress, depression and trauma that has developed from dealing with COVID-19. 

Read the source articles and information in Business Insider, a blog post from ZyDoc, in AI Trends, a press release from Woebot Labs, and in Fierce Healthcare.

Originally written by
Shannon Flynn | October 8, 2020
for aitrends

Shannon Flynn is a managing editor at Rehack, a website featuring coverage of a range of technology niches. 

Read more…
Gold Level Contributor

Image: NOAA - Unsplash

Engineers at the University of California San Diego have built a squid-like robot that can swim untethered, propelling itself by generating jets of water. The robot carries its own power source inside its body. It can also carry a sensor, such as a camera, for underwater exploration. 

The researchers detail their work in a recent issue of Bioinspiration and Biomimetics. 

“Essentially, we recreated all the key features that squids use for high-speed swimming,” said Michael T. Tolley, one of the paper’s senior authors and a professor in the Department of Mechanical and Aerospace Engineering at UC San Diego. “This is the first untethered robot that can generate jet pulses for rapid locomotion like the squid and can achieve these jet pulses by changing its body shape, which improves swimming efficiency.”

This squid robot is made mostly from soft materials such as acrylic polymer, with a few rigid, 3D printed and laser cut parts. Using soft robots in underwater exploration is important to protect fish and coral, which could be damaged by rigid robots. But soft robots tend to move slowly and have difficulty maneuvering.

The research team, which includes roboticists and experts in computer simulations as well as  experimental fluid dynamics, turned to cephalopods as a good model to solve some of these issues. Squid, for example, can reach the fastest speeds of any aquatic invertebrates thanks to a jet propulsion mechanism. 

Their robot takes a volume of water into its body while storing elastic energy in its skin and flexible ribs. It then releases this energy by compressing its body and generates a jet of water to propel itself. 

At rest, the squid robot is shaped roughly like a paper lantern, and has flexible ribs, which act like springs, along its sides. The ribs are connected to two circular plates at each end of the robot. One of them is connected to a nozzle that both takes in water and ejects it when the robot’s body contracts. The other plate can carry a water-proof camera or a different type of sensor. 

Engineers first tested the robot in a water testbed in the lab of Professor Geno Pawlak, in the UC San Diego Department of Mechanical and Aerospace Engineering. Then they took it out for a swim in one of the tanks at the UC San Diego Birch Aquarium at the Scripps Institution of Oceanography. 

They demonstrated that the robot could steer by adjusting the direction of the nozzle. As with any underwater robot, waterproofing was a key concern for electrical components such as the battery and camera. They clocked the robot’s speed at about 18 to 32 centimeters per second (roughly half a mile per hour), which is faster than most other soft robots. 

“After we were able to optimize the design of the robot so that it would swim in a tank in the lab, it was especially exciting to see that the robot was able to successfully swim in a large aquarium among coral and fish, demonstrating its feasibility for real-world applications,” said Caleb Christianson, who led the study as part of his Ph.D. work in Tolley’s research group. He is now a senior medical devices engineer at San Diego-based Dexcom.  

Researchers conducted several experiments to find the optimal size and shape for the nozzle that would propel the robot. This in turn helped them increase the robot’s efficiency and its ability to maneuver and go faster. This was done mostly by simulating this kind of jet propulsion, work that was led by Professor Qiang Zhu and his team in the Department of Structural Engineering at UC San Diego. The team also learned more about how energy can be stored in the elastic component of the robot’s body and skin, which is later released to generate a jet. 
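The team’s fluid-dynamics simulations are far more sophisticated, but the basic physics of a pulsed water jet can be sketched with a simple momentum-flux estimate: thrust is roughly the mass flow through the nozzle times the jet velocity. The nozzle size and jet speed below are invented for illustration and are not values from the paper.

```python
# Toy momentum-flux estimate of jet thrust: F ~= rho * A * v_jet^2.
# Nozzle size and jet speed are assumed values, NOT the paper's measurements.
import math

rho = 1000.0                      # density of water, kg/m^3
nozzle_diameter = 0.02            # 2 cm nozzle (assumed)
v_jet = 1.0                       # jet speed, m/s (assumed)

area = math.pi * (nozzle_diameter / 2) ** 2    # nozzle cross-section, m^2
mass_flow = rho * area * v_jet                 # kg/s pushed through the nozzle
thrust = mass_flow * v_jet                     # N (momentum flux of the jet)

print(f"mass flow: {mass_flow:.4f} kg/s, thrust: {thrust:.4f} N")
```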

Originally published
October 6, 2020 | By Ioana Patringenaru
UC San Diego News

 

 
Read more…
Standard

What is machine learning?

Machine learning is commonplace today. Any industry that deals with large amounts of data is effectively using machine learning to gain insights and knowledge. (Credit: MF3d/E+/Getty Images)

Machine learning is all around us. It is how Netflix is able to suggest the next movie for you to watch. It is why ads on the internet seem to know what you are interested in. It is also how autocompletion is accurately able to predict the next word you might type.

But what exactly is machine learning, and how does it work?

Simply put, machine learning is a subset of artificial intelligence that involves developing computer algorithms that learn models from large amounts of data. These models are then used to predict specific behavior. For example, if someone tends to watch a lot of movies involving racing and cars, a good recommendation might be a film on the history of cars or car restoration.

Machine learning consists of several key steps. The first step is gathering and sorting data, and then developing a model based on the data. This model is then trained, evaluated, tuned, and then used for predictions. As more data is collected and more predictions are made, the machine or algorithm continues to learn and improve its predictions.
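As a rough sketch of that workflow, the example below runs the gather, train, tune, evaluate, and predict steps with scikit-learn on made-up “viewing history” data; it illustrates the loop described above rather than any particular production system.

```python
# Minimal sketch of the workflow described above: gather data, train a model,
# tune it, evaluate it, then predict. The features and labels are invented.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1. Gather and sort data: 500 hypothetical viewers, 3 features each
#    (share of time spent on racing films, documentaries, dramas).
X = rng.random((500, 3))
# Label: did the viewer watch the recommended car-history film? (made-up rule plus noise)
y = ((X[:, 0] > 0.6) ^ (rng.random(500) < 0.1)).astype(int)

# 2. Split into training and held-out evaluation sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 3. Train and tune: grid-search the regularization strength.
search = GridSearchCV(LogisticRegression(), {"C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

# 4. Evaluate on data the model has never seen.
print("held-out accuracy:", search.score(X_test, y_test))

# 5. Predict for a new viewer who watches mostly racing films.
print("recommend car film?", bool(search.predict([[0.9, 0.1, 0.2]])[0]))
```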

The more data the machine has to base its predictions on, the higher the probability that the prediction is accurate. Thus, machine learning is the process of inferring or deducing new information from existing data, and typically lots of it.

Machine learning is not a new concept. In fact, the term “machine learning” was coined in 1959 by Arthur Samuel, partly in reference to a checkers-playing program. The program would choose the next move that minimized its chance of losing the game. Because of the limitations of the computational hardware of the time, Samuel relied on a search technique called alpha-beta pruning to keep the calculations manageable.
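Alpha-beta pruning skips branches of a game tree that cannot change the final decision. The sketch below applies the idea to a small, invented game tree; it illustrates the technique in general and is not a reconstruction of Samuel’s checkers program.

```python
# Minimal alpha-beta pruning over an abstract game tree.
# Leaves are numbers (scores); internal nodes are lists of child subtrees.
# This illustrates the pruning idea only; it is not Samuel's checkers program.

def alpha_beta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):          # leaf: return its score
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alpha_beta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:                   # opponent will never allow this branch
                break                           # prune the remaining children
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alpha_beta(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# A tiny invented tree: the maximizing player can guarantee a score of 6.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
print(alpha_beta(tree, maximizing=True))        # -> 6
```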

Machine learning today is much more technologically advanced. Computational resources are more powerful, cheaper, and more accessible. Computer memory is also larger, faster, and cheaper. As a result, machine learning algorithms are able to crunch complex mathematical equations incredibly fast.

Any industry that deals with large amounts of data is effectively using machine learning to gain insights and knowledge. The financial industry uses machine learning to help identify and prevent fraud. Google’s self-driving car is a technological innovation built off of machine learning. Many healthcare organizations are now using machine learning, along with various sensors and wearable devices to help identify potential issues and trends relating to medical conditions.

Machine learning will be a core part of technological advances, such as artificial intelligence and the internet of things, for the foreseeable future.

Originally published by
Cabe Atwell | October 6, 2020
Fierce Electronics

Read more…
Gold Level Contributor

A yellow underwater robot (left) finds its way to a mobile docking station to recharge and upload data before continuing a task. (Purdue University photo/Jared Pike)

WEST LAFAYETTE, Ind. — Robots can be amazing tools for search-and-rescue missions and environmental studies, but eventually they must return to a base to recharge their batteries and upload their data. That can be a challenge if your robot is an autonomous underwater vehicle (AUV) exploring deep ocean waters.

Now, a Purdue University team has created a mobile docking system for AUVs, enabling them to perform longer tasks without the need for human intervention.

The team also has published papers on ways to adapt this docking system for AUVs that will explore extraterrestrial lakes, such as those on the moons of Jupiter and Saturn.

“My research focuses on persistent operation of robots in challenging environments,” said Nina Mahmoudian, an associate professor of mechanical engineering. “And there’s no more challenging environment than underwater.”

Once a marine robot submerges in water, it loses the ability to transmit and receive radio signals, including GPS data. Some may use acoustic communication, but this method can be difficult and unreliable, especially for long-range transmissions. Because of this, underwater robots currently have a limited range of operation.

“Typically these robots perform a pre-planned itinerary underwater,” Mahmoudian said. “Then they come to the surface and send out a signal to be retrieved. Humans have to go out, retrieve the robot, get the data, recharge the battery and then send it back out. That’s very expensive, and it limits the amount of time these robots can be performing their tasks.”

Mahmoudian’s solution is to create a mobile docking station that underwater robots could return to on their own. A video describing this research is available on YouTube.

 

 

“And what if we had multiple docks, which were also mobile and autonomous?” she said. “The robots and the docks could coordinate with each other, so that they could recharge and upload their data, and then go back out to continue exploring, without the need for human intervention. We’ve developed the algorithms to maximize these trajectories, so we get the optimum use of these robots.”
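The Purdue planner itself is described in the team’s paper; purely as a toy illustration of the coordination problem, and not their algorithm, the sketch below greedily sends each low-battery vehicle to the nearest currently free mobile dock, using invented positions and battery levels.

```python
# Toy illustration of AUV-to-mobile-dock coordination: greedily assign each
# low-battery vehicle to the nearest free dock. This is NOT the Purdue
# mission-planning algorithm, just a sketch of the scheduling problem.
import math

auvs = {"auv1": (0.0, 2.0), "auv2": (5.0, 5.0), "auv3": (9.0, 1.0)}   # positions (km), assumed
docks = {"dockA": (1.0, 1.0), "dockB": (8.0, 2.0)}                    # mobile dock positions, assumed
battery = {"auv1": 0.15, "auv2": 0.80, "auv3": 0.10}                  # state of charge (0-1), assumed

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

free_docks = dict(docks)
assignments = {}
# Most-depleted vehicles choose first.
for auv in sorted(auvs, key=lambda a: battery[a]):
    if battery[auv] >= 0.25 or not free_docks:
        continue                                   # enough charge left, or no dock available
    nearest = min(free_docks, key=lambda d: dist(auvs[auv], free_docks[d]))
    assignments[auv] = nearest
    del free_docks[nearest]                        # one vehicle per dock at a time

print(assignments)                                 # {'auv3': 'dockB', 'auv1': 'dockA'}
```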

A paper on the mission planning system that Mahmoudian and her team developed has been published in IEEE Robotics and Automation Letters. The researchers validated the method by testing the system on a short mission in Lake Superior.

“What’s key is that the docking station is portable,” Mahmoudian said. “It can be deployed in a stationary location, but it can also be deployed on autonomous surface vehicles or even on other autonomous underwater vehicles. And it’s designed to be platform-agnostic, so it can be utilized with any AUV. The hardware and software work hand-in-hand.”

Mahmoudian points out that systems like this already exist in your living room. “An autonomous vacuum, like a Roomba, does its vacuum cleaning, and when it runs out of battery, it autonomously returns to its dock to get recharged,” she said. “That’s exactly what we are doing here, but the environment is much more challenging.”

If her system can successfully function in a challenging underwater environment, then Mahmoudian sees even greater horizons for this technology.

“This system can be used anywhere,” she said. “Robots on land, air or sea will be able to operate indefinitely. Search-and-rescue robots will be able to explore much wider areas. They will go into the Arctic and explore the effects of climate change. They will even go into space.”

A patent on this mobile underwater docking station design has been issued. The patent was filed through the Secretary of the U.S. Navy. This work is funded by the National Science Foundation (grant 19078610) and the Office of Naval Research (grant N00014-20-1-2085).

Originally published by
Jared Pike | October 6, 2020
Purdue University

Read more…
Silver Level Contributor

EPSRC prize winning photograph by Alexander James Spence, Author provided

Artificial intelligence seems to be making enormous advances. It has become the key technology behind self-driving cars, automatic translation systems, speech and textual analysis, image processing and all kinds of diagnosis and recognition systems. In many cases, AI can surpass the best human performance levels at specific tasks.

We are witnessing the emergence of a new commercial industry with intense activity, massive financial investment, and tremendous potential. It would seem that there are no areas that are beyond improvement by AI – no tasks that cannot be automated, no problems that can’t at least be helped by an AI application. But is this strictly true?

Theoretical studies of computation have shown there are some things that are not computable. Alan Turing, the brilliant mathematician and code breaker, proved that some computations might never finish (while others would take years or even centuries).

For example, we can easily compute a few moves ahead in a game of chess, but to examine all the moves to the end of a typical 80-move chess game is completely impractical. Even using one of the world’s fastest supercomputers, running at over one hundred thousand trillion operations per second, it would take over a year to get just a tiny portion of the chess space explored. This is also known as the scaling-up problem.
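A back-of-the-envelope calculation makes the scale concrete. Assuming roughly 35 legal moves per position and treating the 80 moves as 80 plies (common textbook approximations, not figures from this article), a year of computing at that speed touches only a vanishing sliver of the full tree:

```python
# Back-of-envelope: how much of an 80-ply chess tree fits in one year
# at 1e17 operations per second? (35 moves per position is a rough estimate.)
branching = 35
plies = 80
positions = branching ** plies                  # ~3e123 positions in the full tree

ops_per_second = 1e17                           # "one hundred thousand trillion" ops/s
seconds_per_year = 365 * 24 * 3600
examined = ops_per_second * seconds_per_year    # ~3e24 positions per year, at 1 op each

print(f"full tree : {positions:.3e}")
print(f"one year  : {examined:.3e}")
print(f"fraction  : {examined / positions:.3e}")   # ~1e-99 -- a vanishing sliver
```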

Early AI research often produced good results on small numbers of combinations of a problem (like noughts and crosses, known as toy problems) but would not scale up to larger ones like chess (real-life problems). Fortunately, modern AI has developed alternative ways of dealing with such problems. These can beat the world’s best human players, not by looking at all possible moves ahead, but by looking a lot further than the human mind can manage. It does this by using methods involving approximations, probability estimates, large neural networks and other machine-learning techniques.

But these are really problems of computer science, not artificial intelligence. Are there any fundamental limitations on AI performing intelligently? A serious issue becomes clear when we consider human-computer interaction. It is widely expected that future AI systems will communicate with and assist humans in friendly, fully interactive, social exchanges.

Theory of mind

Of course, we already have primitive versions of such systems. But audio-command systems and call-centre-style script-processing just pretend to be conversations. What is needed are proper social interactions, involving free-flowing conversations over the long term during which AI systems remember the person and their past conversations. AI will have to understand intentions and beliefs and the meaning of what people are saying.

This requires what is known in psychology as a theory of mind – an understanding that the person you are engaged with has a way of thinking, and roughly sees the world in the same way as you do. So when someone talks about their experiences, you can identify and appreciate what they describe and how it relates to yourself, giving meaning to their comments.

We also observe the person’s actions and infer their intentions and preferences from gestures and signals. So when Sally says, “I think that John likes Zoe but thinks that Zoe finds him unsuitable”, we know that Sally has a first-order model of herself (her own thoughts), a second-order model of John’s thoughts, and a third-order model of what John thinks Zoe thinks. Notice that we need to have similar experiences of life to understand this.

Physical learning

It is clear that all this social interaction only makes sense to the parties involved if they have a “sense of self” and can similarly maintain a model of the self of the other agent. In order to understand someone else, it is necessary to know oneself. An AI “self model” should include a subjective perspective, involving how its body operates (for example, its visual viewpoint depends upon the physical location of its eyes), a detailed map of its own space, and a repertoire of well understood skills and actions.


AI needs a body to develop a sense of self. Phonlamai Photo/Shutterstock

That means a physical body is required in order to ground the sense of self in concrete data and experience. When an action by one agent is observed by another, it can be mutually understood through the shared components of experience. This means social AI will need to be realised in robots with bodies. How could a software box have a subjective viewpoint of, and in, the physical world, the world that humans inhabit? Our conversational systems must be not just embedded but embodied.

A designer can’t effectively build a software sense-of-self for a robot. If a subjective viewpoint were designed in from the outset, it would be the designer’s own viewpoint, and it would also need to learn and cope with experiences unknown to the designer. So what we need to design is a framework that supports the learning of a subjective viewpoint.

Fortunately, there is a way out of these difficulties. Humans face exactly the same problems but they don’t solve them all at once. The first years of infancy display incredible developmental progress, during which we learn how to control our bodies and how to perceive and experience objects, agents and environments. We also learn how to act and the consequences of acts and interactions.

Research in the new field of developmental robotics is now exploring how robots can learn from scratch, like infants. The first stages involve discovering the properties of passive objects and the “physics” of the robot’s world. Later on, robots note and copy interactions with agents (carers), followed by gradually more complex modelling of the self in context. In my new book, I explore the experiments in this field.

So while disembodied AI definitely has a fundamental limitation, future research with robot bodies may one day help create lasting, empathetic, social interactions between AI and humans.

Originally written by
Mark Lee,  Emeritus Professor in Computer Science, Aberystwyth University
for The Conversation | October 5, 2020

Read more…
Silver Level Contributor

Photo credit: Charles Deluvio.

A new study from North Carolina State University and Syracuse University assessed what would motivate people to use chatbots for mental health services in the wake of a mass shooting. The researchers found that users’ desire to help others with mental health problems was a more powerful driver than seeking help for their own problems.

“We saw a sharp increase in mass shootings in the U.S. in recent years, and that can cause increases in the need for mental health services,” says Yang Cheng, first author of the study and an assistant professor of communication at NC State. “And automated online chatbots are an increasingly common tool for providing mental health services – such as providing information or an online version of talk therapy. But there has been little work done on the use of chatbots to provide mental health services in the wake of a mass shooting. We wanted to begin exploring this area, and started with an assessment of what variables would encourage people to use chatbots under those circumstances.”

The researchers conducted a survey of 1,114 U.S. adults who had used chatbots to seek mental health services at some point prior to the study. Study participants were given a scenario in which there had been a mass shooting, and were then asked a series of questions pertaining to the use of chatbots to seek mental health services in the wake of the shooting. The survey was nationally representative and the researchers controlled for whether study participants had personal experiences with mass shootings.

The researchers found a number of variables that were important in driving people to chatbots to address their own mental health needs. For example, people liked the fact that chatbots were fast and easy to access, and they thought chatbots would be good sources of information. The study also found that people felt it was important for chatbots to be humanlike, because they would want the chatbots to provide emotional support.

But researchers were surprised to learn that a bigger reason for people to use chatbots was to help other people who were struggling with mental health issues.

“We found that the motivation of helping others was twice as powerful as the motivation of helping yourself,” Cheng says.

Helping others, in this context, would include talking to a chatbot to keep a loved one’s mental illness from getting worse, finding ways to encourage the loved one to access the chatbot services, or demonstrating to the loved one that the services are easy to use.

“Our study offers detailed insights into what is driving people to access mental health information on chatbot platforms after a disaster, as well as how they are using that information,” Cheng says. “Among other applications, these findings should be valuable for the programmers and mental healthcare providers who are responsible for developing and deploying these chatbots.”

The paper, “AI-Powered Mental Health Chatbots: Examining Users’ Motivations, Active Communicative Action, and Engagement after Mass-Shooting Disasters,” is published in the Journal of Contingencies and Crisis Management. The paper was co-authored by Hua Jiang of Syracuse. The work was done with support from a CUSE seed grant from Syracuse University.

Originally published by
Yang (Alice) Cheng, Matt Shipman | October 5, 2020
NC State University News

Read more…
Gold Level Contributor

AI of Growing Importance to Gaming Industry

The gambling industry is beginning to incorporate AI to enable advances, including in ways to try to help problem gamblers. (Credit: Getty Images) 

Operators of casinos and online games are incorporating AI in efforts ranging from maximizing profits to helping problem gamblers.  

The gaming industry is technically savvy, having integrated automation into its operations to gain efficiencies and offer conveniences to customers. Now AI is being applied to casinos and the gambling industry, in-person and online, enabling more advances such as allowing multiple users to play the same game at the same time from different locations.   

Other advantages include the ability to track compliance with online gambling regulations and the collection of data on gambling preferences to enable predictions and deliver customized service to customers, according to a recent account in LA Progressive.  

It might be difficult for operators to enhance the customer experience without the use of AI in the future, suggested a speaker at SBC Summit Barcelona – Digital, the Global Betting & Gaming Show, usually held in Barcelona but held online recently.  

“For sure what we are focusing on is to increase our customer experience, the winning experience, and that leads to revenue and revenue on the lower end. I’m not seeing operators managing their operations without the help of AI in the near future. Talking about revenues, I think it’s too much over the next three years with everything on the internet,” stated Américo Loureiro, director of Solverde, a casino and hotel firm, in an account in CasinoBeats.

“Our plan is for the next three years to increase the AI on our operations and get benefits from this. We know that this is the very beginning, and we want to be on top of this because the ones that agree more with AI and manage AI will be the most successful operators,” he stated.  

Startup Rootz was formed in 2018 by internet gaming (igaming) professionals intent on building an online gaming platform. “It was most definitely a strategic decision to place AI in the center of thought when we started designing the platform,” stated Edvinas Subacius, chief data officer of Rootz, speaking at the SBC event. “We know that we can increase player turnover or spending up to five per cent by doing recommendation engines. At the same time, we know that we increase their lifetime value between ten per cent to 20% so four times more than actual spending by applying AI to manage our bonus cost and promotional values.” 

He suggested weighing the benefits of AI with a headcount measure: the number of operations the casino can get done per headcount, or how much additional revenue each headcount generates. Subacius stated that his company has 70 employees, “and we are running operations equal to other companies that have 300 to 500 employees. So the sooner you start with AI, and the closer it is to the heart of your mentality and your platform, it can create the efficiency of 100% or 1000% scale. It’s not 5% anymore.” 

The gaming industry has been collecting data from customers and market for years, and now the industry is positioned to use AI and machine learning to advance business goals, suggested Steven Paton, business solutions advisor for BMIT Technologies, a multi-site data center provider in Malta. “The next step is definitely the transitional phase of utilizing big data and machine learning to push that further over the next five years,” Paton stated at the SBC event.  

Norsk Tipping Constructively Works with Problem Gamblers 

The risk that AI will more effectively target problem gamblers is recognized by Norsk Tipping, Norway’s state-owned gambling company, which is working with a software provider on behavioral analysis to help identify problem gamblers.   

The 2.2 million customers of Norsk Tipping have passed identity checks and have gaming accounts, which track game-playing frequency and winnings received, enabling the company to set gaming limits if necessary. “This wealthy supply of data provides unique possibilities to exploit AI to prevent problem gambling. Norsk Tipping’s mandate is clear: the company shall act to prevent the negative consequences of gambling,” stated Tanja Sveen, advisor responsible for gambling for the company, on the blog of Norsk Tipping.  

Machine learning can help expose risky gaming patterns and personalize preventive measures to reduce risk gambling, she suggested.  

Norsk has been working with a gaming behavioral analysis tool from Playscan, a company founded to address issues around problem gambling. The analysis tool aims to expose risk-filled gaming patterns, provide the customer with feedback explaining the reasoning behind the risk assessment, and suggest preventive measures where appropriate.  

The first Playscan model was based on a number of self-assessments completed by customers. Those were used to develop the second-generation model, by comparing customer data from the first period to their responses to a self-assessment questionnaire for the second period, in order to train the model. “More than 60,000 completed self-assessments were used to develop the second generation of the Playscan model,” Sveen wrote. 

Results so far have been positive. “The new model is clearly better than the previous one. It has a higher level of accuracy and the customer feedback shows a greater degree of agreement with the risk assessment,” she stated. 

Proactively Calling Problem Gamblers 

Norsk is using AI in an effort to proactively call—on the phone—customers with problem gambling habits. During the call, the customer is given facts on their gambling spending and the need for change is discussed. This often results in a reduction of the customer’s gaming limit.   

In order to identify which customers would most benefit from a call, the company set up a machine learning study with the BI Norwegian Business School during the spring of 2018. The researchers used a sample dataset of 1,400 customers who had received a proactive call, a random selection from the 10,000 customers who had lost the most money through gaming in the last year. That gave the team a representative data set useful for creating a model. Each customer’s theoretical loss in the 12 weeks before the call was compared with their spending in the 12 weeks after the call, to see whether spending had decreased or increased.     

The standard procedure for machine learning was employed; the model development and the data evaluation were carried out using the automated machine learning tool DataRobot. 

“The evaluation showed that we managed to develop several models with an ability to provide rather highly accurate predictions,” Sveen wrote. “To a great extent the models were able to identify the customers who made use of the proactive calls.” 
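Norsk Tipping used DataRobot for this step; the sketch below shows the same general recipe on synthetic data, labeling each customer by whether spending fell in the 12 weeks after the call and fitting a simple classifier. The features, data, and model choice are all invented for illustration and are not the company’s actual workflow.

```python
# Sketch of the general recipe described above, on synthetic data:
# label customers by whether spending fell after the proactive call and
# fit a classifier to predict who benefits. This is NOT Norsk Tipping's
# DataRobot workflow; the features and data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 1400                                         # mirrors the sample size in the text

# Invented pre-call features: theoretical loss, sessions per week, nighttime-play share.
X = np.column_stack([
    rng.gamma(2.0, 500.0, n),                    # loss in the 12 weeks before the call
    rng.poisson(6, n),
    rng.random(n),
])

# Invented response: heavier nighttime players are more likely to cut back after a call.
reduce_prob = 0.3 + 0.5 * X[:, 2]
y = (rng.random(n) < reduce_prob).astype(int)    # 1 = spending decreased in the 12 weeks after

model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", round(scores.mean(), 3))

model.fit(X, y)
# Rank hypothetical new customers by predicted probability of benefiting from a call.
print(model.predict_proba(X[:5])[:, 1])
```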

Read the source articles and material in LA Progressive, CasinoBeats and on the blog of Norsk Tipping.

Originally posted by
AI Trends Staff | October 1, 2020
aitrends

 

Read more…
Silver Level Contributor

A still from a U.S. Space Force recruiting ad showing a service member in what appears to be an astronaut helmet. (Space Force)

Since it was established in Dec. 2019 — and probably even before that — one question has plagued the U.S. Space Force: when will they send humans into orbit?

While Space Force officials have tried to keep the focus on what their personnel will do on the ground to support the nation’s space assets, they’ve done little to dampen speculation. The Space Force probably didn’t help itself when it released a recruiting ad earlier this year that seemingly implied its members would literally be going to space.

But for anyone joining the Space Force to be an astronaut, Maj. Gen. John Shaw has some potentially bad news.

“I think it will happen,” said Shaw during the AFWERX Engage Space event Sept. 29. “But I think it’s a long way off.”

Shaw would know. He’s been a key member of the lean staff standing up both the Space Force and U.S. Space Command, serving simultaneously as commander of the former’s Space Operations Command and the latter’s Combined Force Space Component Command. While Shaw sees humans in orbit as part of the military’s plans somewhere down the line, there are two big reasons why it’s not likely to happen soon:

“First, space isn’t really all that habitable for humans. We’ve learned that since our early space days,” he explained. “And the second is, we’re getting darned good at this robotics thing in space.”

“You know, the best robots that humans have ever created are probably satellites — either ones that explore other planets or operated within our own Earth/moon system. GPS satellites might be among those. They’re incredible machines, and we’re only getting better with machine learning and artificial intelligence. We’re going to have an awful lot of automated and autonomous systems operating in Earth and lunar orbit and solar orbit in the days and years to come doing national security space activity,” Shaw added.

In addition to satellites, the Space Force and the U.S. Air Force are investing in robotic capabilities that further preclude the need for humans in space.

Most notable is the Robotic Servicing of Geosynchronous Spacecraft (RSGS) program being run by the Defense Advanced Research Projects Agency. With RSGS, DARPA wants to develop a robotic arm that can be placed on a free flying spacecraft which can navigate up to satellites to conduct repairs, orbital adjustments, or even install new payloads.

DARPA is partnering with SpaceLogistics, a Northrop Grumman subsidiary that successfully began the first on-orbit satellite life extension mission earlier this year. SpaceLogistics will provide the spacecraft; DARPA will provide the arm. DARPA hopes to launch the robotically enhanced vehicle into orbit in late 2022.

Closer to the ground, the Air Force Research Laboratory is building ROBOpilot, a robot that can fly planes. The robot completely replaces the need for human pilots—it can press pedals to activate brakes, pull on the yoke to steer, adjust the throttle, and even read the dashboard instruments to see where it is and where it’s going. Following a landing mishap that sidelined it for several months, ROBOpilot returned to the skies for a 2.2-hour test flight Sept. 24.

The Space Force doesn’t necessarily need humans in space to conduct its missions. Take the secretive X-37b space plane, for instance. The unmanned vehicle is able to take off, carry host experiments into orbit, deploy satellites, and return to earth without humans on board.

But even as the military makes it less and less imperative to send humans into space, Shaw believes that it’s inevitable.

“At some point, yes, we will be putting humans into space," said Shaw. "They may be operating command centers somewhere in the lunar environment or someplace else that are continuing to operate an architecture that is largely perhaps autonomous.”

In July, the Sierra Nevada Corporation announced it had received a study contract for prototype orbital outposts—autonomous, free flying vehicles in low Earth orbit that will facilitate space experiments and demonstrations. Missions will include hosting payloads, supporting space assembly and manufacturing, microgravity experimentation, logistics, training, testing and evaluations. The outposts are expected to be on orbit within 24 months of the award. SpaceNews later confirmed that two other companies—Nanoracks and Arkisys—have also received study contracts.

While the expectation is that these orbital outposts will be unmanned for now, the Defense Innovation Unit (DIU) hasn’t ruled out human occupants of future outposts. A DIU spokesperson told C4ISRNET in 2019 that “the prototype will explore the military utility of exclusive DoD access to an unmanned orbital platform in order to perform experiments with no risk to human crew or other non-DOD payloads.” However, DIU has also noted that it would be interested in securing a “human rating” for future outposts.

So even if humans on orbit are not part of the military’s immediate plans, it remains a tantalizing possibility.

“At some point that will happen. I just don’t know when,” said Shaw. “And it’s anybody’s guess to pick the year when that happens.”

Originally published by
Nathan Strout | September 29, 2020
C4ISRNET

Read more…
Gold Level Contributor

7 Creative Uses of AI in Digital Payments

AI, or Artificial Intelligence, is known for streamlining processes securely, but when it comes to digital payment solutions, AI goes beyond streamlining and security. It brings automation and enables users to monitor online payments. Here are seven notable uses of AI in the payment industry.

How Does AI Transform the Digital Payment Industry?

  1. Neural Technology Works Wonders

A self-driving or autonomous car is no longer a new concept, but have you ever thought that the technology behind a self-driving car could be used to find fraudulent card or loan applications? The neural network technology, or Artificial Neural Network (ANN), that powers autonomous cars imitates the human brain, and it has played a vital role in finding identities stolen in the Equifax data breach and other incidents.

ID Analytics has launched a new fraud scoring system for new applications based on convolutional neural network technology. It is a much more advanced version of the existing system and gives banks and financial institutions indicators of whether a fraud attempt comes from a first-party or a third-party source. The neural technology builds a model of the person’s data that includes address, phone number, and transaction history.

  2. Bot Attacks become Reality

The fintech industry becomes more dependent on AI technology with every passing day. Whether we talk of enterprise-focused banking apps like Chime and Wave or AI-based chatbots that let users get personalized notifications on spending and investments, AI technology is gaining traction among entrepreneurs.

Here we are pointing to the flip side of machine learning and AI bots. Fraudsters use AI-powered bots to commit online fraud, which has become a big challenge for various industry sectors, including travel. The travel and tourism industry sees many types of online and offline fraud attempts, especially during the holiday season, when criminals try to exploit the surge in travel volume.

Many cybercriminals use bots to reserve seats on flights to cause a wrongful increase in the price of unsold seats. Travelers and tour companies face a range of challenges arising from the fraudulent use of bots.

  3. Payments and Social Media Fusion

After seeing the dark side of AI, let’s go through an exciting opportunity for digital payment solutions: the fusion of payments and social media. Facebook, Alibaba, and several other companies have made the most of this fusion by bringing payment solutions into messenger apps. Amazon has now followed suit by working on a messaging platform known as Anytime.

We can certainly expect the term “conversational commerce” to gain ground in the fintech and eCommerce segments alike over time. The day is not far off when conversational commerce will be broadly adopted by people across the world; Chris Messina predicted a bright future for conversational commerce back in 2016. Consumers will use chat, messaging, and even voice to make online transactions, and AI technology will play a crucial role in making this possible.

In the near future, the lines between a human agent and a computer bot will blur, as users will not be able to tell the difference between them, thanks to advancing AI technology.

  4. AI Bots on Facebook Messenger

Financial companies and banks have gradually started deploying AI-based bots on Facebook Messenger to handle payments and offer personalized customer service. American Express (AmEx) is one such company, with a dedicated chatbot on Messenger for sending transaction notifications and benefit reminders to users. Every time the user buys anything, the chatbot sends a notification about the transaction.

Another version of the AmEx chatbot lets users add a card while linking an AmEx account to their Facebook account. The card’s credentials are then stored with the user’s Facebook account so they can transact on the social network. We will have more intelligent and powerful AI bots in the future that will perform various tasks on Facebook Messenger and other social media channels. Simply put, the presence of AI bots on Facebook will boost the concept of conversational commerce.

  5. AI to Make Users Financially Healthy

This is one of the biggest benefits of AI. The fintech startup Douugh has developed an AI-based virtual assistant to help its consumers regain financial health. Sophie, as Douugh has named it, can assist consumers in reducing their credit card debt and student loans while enabling them to make better savings decisions. The virtual assistant also runs diagnostics on customers’ financial positions and manages their spending goals while handling payment-related tasks like paying bills and tracking payments. In a way, Douugh has made a giant leap toward offering a smart bank account to its customers.

AI technology is designed to analyze a user’s or customer’s behavior and make predictions. When implemented in the fintech sector, it keeps analyzing data on the customer’s spending patterns and saving habits. In this way, it can help users maintain a balance between loans and deposits while making them financially healthier. In the coming years, many fintech companies, and the BFSI sector as a whole, will leverage AI technology to provide 360-degree financial solutions.

  6. Bot Turns Banker

Singapore’s largest bank, DBS, has used conversational AI to take the customer experience to a new level. This approach enables the bank to manage customers’ accounts and lets them initiate payments. The bank has taken care to make the interactions natural, so that its customers do not realize they are talking to a bot rather than a human being.

Kai, the AI-powered bot of DBS, can do more for the bank’s customers than agents in customer centers. For example, if a customer asks how much they spent on groceries last month, an agent might take some time to get the data, but Kai answers immediately.

Many other banks will also come up with robust and reliable bots in the coming years. On one hand, such bots will relieve the pressure on bankers, and on the other hand, customers can get improved services on a 24/7 basis.

  7. AI Makes Stores Intelligent

It’s time to make the walls and ceilings of your brick-and-mortar store intelligent. Shanghai’s self-driving supermarket Moby has actually implemented this futuristic approach, in which the supermarket ‘recognizes’ customers through a mobile app. What’s more, the store is solar-powered and has a holographic greeter as staff.

Though the concept is at a nascent phase, it has vast scope and offers many opportunities for retailers. Moby experimented with the concept in Shanghai because it wanted to maintain a retail presence in areas where the economy cannot support a permanent grocery store. And although AI technology takes care of customer service and the delivery process in the autonomous store, human help is still necessary to return unsold items to a warehouse.

Amazon Go is a more efficient example of a concept store. It has sensors throughout the store instead of holographic greeters. The sensors can readily detect what customers are purchasing and which payment method is used. In other words, AI-based sensors simply collect and analyze data related to user behavior.

In a nutshell, AI is a game-changer in the digital payment industry. As the technology becomes mainstream in the sector, we will see ever more innovative and creative uses of AI.

Originally published by
Robert Jackson | September 30, 2020
Readwrite

Read more…
Gold Level Contributor

First AI image from space with HyperScout

Image: Cosine

For the first time in history an image was processed in space using artificial intelligence. The image was processed by the tailored artificial intelligence hardware of HyperScout 2, a miniaturized Earth observation instrument that is developed under the leadership of cosine. The deep neural network algorithm identified the clouds in an image of part of the Earth’s surface. The capability to process images using artificial intelligence on a satellite opens up possibilities for a large number of applications.

The premiere of artificial intelligence in space was announced today by Josef Aschbacher, director of the Earth Observation program of the European Space Agency, at the opening of the Phi-Week 2020 symposium. The first HyperScout 2 instrument, carrying the Ф-sat-1 artificial intelligence (AI) experiment, was launched into space on 3 September 2020 from the Guiana Space Center in Kourou. The HyperScout 2 instrument is on board one of the two nanosatellites of the FSSCat mission, which monitors sea ice and soil moisture in support of the Copernicus Land and Marine Environment Services, and was made possible by ESA’s InCubed program.

Integrated into the HyperScout 2 instrument is a Myriad 2 Vision Processing Unit (VPU) from Intel. This allows the instrument to process images with machine learning algorithms without requiring more power than is available on a nanosatellite. The identification of clouds in an image of Earth is a first demonstration of the capabilities of an AI system on board an Earth observation satellite. The AI algorithm is trained on the ground using machine learning on synthetic as well as HyperScout data, which includes properties of the image that are invisible to the human eye thanks to the 45 different color bands in the visible and infrared spectrum. The resulting algorithms are implemented on the dedicated AI hardware to analyze the images extremely efficiently on board.
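The actual Ф-sat-1 network is not detailed here, but the general shape of the task can be sketched as a small convolutional model that maps a 45-band image cube to a per-pixel cloud mask, as below. The architecture, sizes, and random input are placeholders; the real on-board model also goes through a separate conversion step to run on the Myriad 2 VPU.

```python
# Rough sketch of per-pixel cloud masking on a 45-band image cube (PyTorch).
# Architecture and sizes are invented; this is not the Phi-sat-1 model or its
# deployment pipeline to the Myriad 2 VPU.
import torch
import torch.nn as nn

class TinyCloudNet(nn.Module):
    def __init__(self, bands: int = 45):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),   # one logit per pixel: cloud vs clear
        )

    def forward(self, cube: torch.Tensor) -> torch.Tensor:
        return self.net(cube)

model = TinyCloudNet()
cube = torch.rand(1, 45, 64, 64)               # stand-in for a 45-band image tile
with torch.no_grad():
    cloud_prob = torch.sigmoid(model(cube))    # (1, 1, 64, 64) cloud probability map
mask = cloud_prob > 0.5
print("cloudy pixels:", int(mask.sum()), "of", mask.numel())
```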

Applied to Earth – fire and water

The capability to process spectral images in space using AI makes a large range of applications possible. One potential application is wildfire management: high-risk fire zones can be mapped, alerts can be triggered, and the development and spread of any wildfire can be monitored.

There are also many possible benefits for agriculture, such as crop yield prediction, indicating how much crop could be harvested and when to harvest. Problem areas can be identified, alerts can be triggered and the situation analyzed. This makes it possible to take action, such as optimizing irrigation, adjusting levels of fertilization and targeted use of minimal levels of pesticides.

Being a state-of-the-art miniaturized instrument, HyperScout 2 can be flown on a small satellite, which can be launched in larger numbers even on a single rocket. By using a constellation consisting of multiple HyperScout instruments, observations can be made several times per week or even per day, so that alerts can be sent and action can be taken in time. This can also be of great benefit for environmental monitoring, water quality, air quality, deforestation and change detection.

Next steps

cosine is currently organizing the development of several of these applications with partners and clients. ‘’Data that is acquired by the HyperScout 2 instrument can be combined with data from the HyperScout 1 instrument, which has been in orbit for almost 3 years on the GOMX-4B satellite. In addition, a version of the AI algorithms developed for HyperScout 2 can be uploaded to the GPU, the graphic processing unit on HyperScout 1. This presents a unique opportunity for partners to work with cosine to develop applications using distributed AI on both HyperScout 1 and HyperScout 2’’, explains Marco Esposito, business unit manager Remote Sensing at cosine.

cosine is also looking for partners and clients to extend the use of on-board data processing using AI to the other instrument lines cosine is developing. These include bathymetry, imaging polarimetry for cloud and aerosol assessment, spectroscopic imaging for air quality, infrared imaging for agricultural monitoring and LIDAR for height mapping of land, vegetation, buildings and water levels.

Team effort

This world’s first was achieved in cooperation with and support from many partners.

HyperScout 1 was developed by cosine (NL) with consortium partners S&T (NO), TU Delft (NL), VDL (NL) and VITO (BE), with funding through the ESA GSTP program, with contributions from the Dutch, Belgian and Norwegian national space organizations: Netherlands Space Office, BELSPO and Norsk Romsenter.

HyperScout 2 was developed by cosine Remote Sensing (NL) supported by the InCubed and Earth Observation programs of ESA.

The FSSCat consortium consists of DEIMOS Engenharia (PT), Universitat Politècnica de Catalunya (ES), Golbriak Space (ET), Tyvak International (IT) and cosine Remote Sensing (NL).

The Ф-sat-1 consortium comprises cosine Remote Sensing (NL), University of Pisa (IT), Sinergise (SI) and Ubotica (IE), supported by ESA’s Earth Observation Ф-lab and GSTP fly element.

Originally published by
Cosine | September 28, 2020

Read more…

JAAGNet AI Feed

AI World Government - Virtual Event

  • Description:

    AI World Government – Virtual Event
    https://www.aiworldgov.com/

    Join more than 100 Government Agencies at AI World Government Virtual

    AI World Government provides a comprehensive three-day virtual forum to educate public sector agencies on proven strategies and tactics to deploy AI and cognitive technologies. With AI technology at the forefront of our everyday lives, there are significant efforts…

  • Created by: Kathy Jones
  • Tags: ai, government, ai world government

AI and Big Data Expo Europe 2020 - Hybrid Event

AI For Good Summit 2021 - Seattle
