cross pond high tech
light views on high tech in both Europe and US
Scooped by Philippe J DEWOST
Scoop.it!

Watch a Robot AI Beat a World Class Curling Competitor

On the ice, a machine-learning system often triumphed over high-level South Korean players

Artificial intelligence still needs to bridge the “sim-to-real” gap. Deep-learning techniques that are all the rage in AI log superlative performances in mastering cerebral games, including chess and Go, both of which can be played on a computer. But translating simulations to the physical world remains a bigger challenge.

A robot named Curly that uses “deep reinforcement learning”—making improvements as it corrects its own errors—came out on top in three of four games against top-ranked human opponents from South Korean teams that included a women’s team and a reserve squad for the national wheelchair team. (No brooms were used).

One crucial finding was that the AI system demonstrated its ability to adapt to changing ice conditions. “These results indicate that the gap between physics-based simulators and the real world can be narrowed,” the joint South Korean-German research team wrote in Science Robotics on September 23.
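
As a very rough illustration of "making improvements as it corrects its own errors" (a toy trial-and-error loop, not the actual Curly system; the simulator, reward, and parameters below are invented for the sketch):

```python
import random

def simulate_throw(velocity, curl, ice_friction):
    # Hypothetical stand-in for a physics simulator: higher is closer to the target.
    ideal_v = 2.2 / ice_friction
    return -abs(velocity - ideal_v) - 0.5 * abs(curl - 0.3)

def adapt(ice_friction, episodes=200):
    # Minimal trial-and-error adaptation: keep a perturbed throw if it scores better.
    params = {"velocity": 2.0, "curl": 0.0}
    best = simulate_throw(params["velocity"], params["curl"], ice_friction)
    for _ in range(episodes):
        trial = {k: v + random.gauss(0, 0.05) for k, v in params.items()}
        reward = simulate_throw(trial["velocity"], trial["curl"], ice_friction)
        if reward > best:            # "correct its own error": adopt the better throw
            params, best = trial, reward
    return params, best

print(adapt(ice_friction=1.0))       # parameters re-adapt when ice conditions change
print(adapt(ice_friction=1.2))
```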

Philippe J DEWOST's insight:

Curling, in its human version, is all about cooperation. How will broom holders cooperate with a robotic launcher that no longer seems to need their help? Is this still curling, after all?

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

AWS launches its custom Inferentia AI chips


At its re:Invent conference, AWS today announced the launch of its Inferentia chips, which it initially announced last year. These new chips promise to make inferencing, that is, using the machine learning models you pre-trained earlier, significantly faster and more cost-effective.

As AWS CEO Andy Jassy noted, a lot of companies are focusing on custom chips that let you train models (though Google and others would surely disagree there). Inferencing tends to work well on regular CPUs, but custom chips are obviously going to be faster. With Inferentia, AWS offers lower latency and three times the throughput at 40% lower cost per inference compared to a regular G4 instance on EC2.

The new Inf1 instances promise up to 2,000 TOPS and feature integrations with TensorFlow, PyTorch and MXNet, as well as the ONNX format for moving models between frameworks. For now, it’s only available in the EC2 compute service, but it will come to AWS’s container services and its SageMaker machine learning service soon, too.
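
As a hedged illustration of the ONNX interchange point above (generic PyTorch tooling, not Inferentia- or Neuron-specific code), this sketch exports a pretrained model to ONNX so it can be moved between frameworks or handed to a downstream compiler; the file name is arbitrary:

```python
import torch
import torchvision

# Minimal sketch: export a pretrained PyTorch model to the ONNX interchange format.
model = torchvision.models.resnet50(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)   # example batch used to trace the graph

torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",                         # arbitrary output path
    input_names=["input"],
    output_names=["logits"],
    opset_version=11,
)
```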

Philippe J DEWOST's insight:

Amazon continues going vertical with custom AI chip design made available in its cloud offerings.

Philippe J DEWOST's curator insight, December 9, 2019 4:04 AM

Computing power is one of the levers of power, full stop. Follow-up: even booksellers are now getting into proprietary processor design (and this one is dedicated to AI). We are still waiting for FNAC's processor or Cdiscount's GPU...

Scooped by Philippe J DEWOST
Scoop.it!

Hyderabad based Fireflies.ai, founded by MIT & Microsoft alumni, raises $5m to put a voice assistant in every meeting


How does Fireflies.ai work? Users can connect their Google or Outlook calendars with Fireflies and have our AI system capture meetings in real time across more than a dozen different web-conferencing platforms like Zoom, Google Meet, Skype, GoToMeeting, Webex, and many more systems. These meetings are then indexed, transcribed, and made searchable inside the Fireflies dashboard. You can comment, annotate key moments, and automatically extract relevant information around numerous topics like next steps, questions, and red flags.

Instead of spending time frantically taking notes in meetings, Fireflies users take comfort knowing that shortly after a meeting they are provided with a transcript of the conversation and an easy way to collaborate on the project going forward.

Fireflies can also sync all this vital information back into the places where you already work thanks to robust integrations with Slack, Salesforce, Hubspot, and other platforms.

Fireflies.ai is the bridge that helps data flow seamlessly from your communication systems to your system of records.

This approach is possible today because of major technological changes over the last 5 years in the field of machine learning. Fireflies leverages recent advances in Automatic Speech Recognition (ASR), natural language processing (NLP), and neural nets to create a seamless way for users to record, annotate, search, and share important moments from their meetings.
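
As a hedged sketch of such an ASR-plus-NLP meeting pipeline (not Fireflies' actual implementation; the transcription stub, keyword patterns, and file name below are assumptions made for the example):

```python
import re

def transcribe(audio_path):
    # Placeholder for a speech-to-text call (e.g., a cloud ASR service).
    # Returns (speaker, sentence) pairs; hard-coded here to keep the sketch runnable.
    return [("Alice", "We will ship the beta next Friday."),
            ("Bob", "Action item: Bob to draft the release notes."),
            ("Alice", "Red flag: the vendor contract is still unsigned.")]

ACTION_PATTERNS = [r"\baction item\b", r"\bnext steps?\b", r"\bred flag\b"]

def index_meeting(audio_path):
    # Transcribe, then pull out the sentences worth surfacing in a dashboard.
    transcript = transcribe(audio_path)
    highlights = [
        {"speaker": s, "text": t}
        for s, t in transcript
        if any(re.search(p, t, re.IGNORECASE) for p in ACTION_PATTERNS)
    ]
    return {"transcript": transcript, "highlights": highlights}

print(index_meeting("weekly_sync.wav")["highlights"])
```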

Who is Fireflies for? The beauty of Fireflies is that it's been adopted by people in different roles across organizations big and small:

  • Sales managers use Fireflies to review their reps' calls at lightning speed and provide on-the-spot coaching.
  • Marketers create key customer soundbites from calls to use in their campaigns.
  • Recruiters no longer worry about taking hasty notes and instead spend more time paying attention to candidates during interviews.
  • Engineers refer back to specific parts of calls using our smart search capabilities to make everyone aware of the decisions that were finalized.
  • Product managers and executives rely on Fireflies to document knowledge and important initiatives that are discussed during all-hands and product planning meetings.

How to get access: Fireflies has a free tier for individuals and teams to easily get started. For more advanced capabilities like augmented call search, more storage, and admin controls, we offer different tiers for growing teams and enterprises. You can learn more about our pricing and tiers by going to fireflies.ai/pricing.

 

Philippe J DEWOST's insight:

What if meeting minutes were automatic? What if the decisions taken were distributed and followed up on automatically too?

No more typing on a keyboard and polluting the meeting, no more spending precious time on it...

That is the promise of this new artificial-intelligence-based application (read: automated content and context recognition).

Let's remain cautious, though; voice dictation has been a regularly disappointed fantasy since the 1990s with Dragon Dictate on the PC, and again in 2009 with the SpinVox scandal on mobile. From now on, the reservations will focus more on the trade-off between privacy and efficiency, and that game is by no means won.

At the very least, Fireflies.ai deserves credit for taking another run at speech recognition...

Philippe J DEWOST's curator insight, December 2, 2019 3:27 AM

What if meeting notes were automatically generated and made available shortly after the conference call? What if action items were assigned too?

No more need for post-processing, no more in-meeting typing pollution: this is the promise made by Fireflies with #AI (read: "automated pattern detection and in-context recognition").

History reminds us how cautiously we should approach the longstanding fantasy of voice dictation (not speaking here of voice assistants): Dragon Dictate in the 1990s never lived up to the promise, nor did SpinVox in 2009 (it ended in tears). Now, with growing concerns over the privacy vs. convenience balance, the war is still not over.

Scooped by Philippe J DEWOST
Scoop.it!

McDonald’s acquires Apprente to bring voice technology to drive-thrus


McDonald’s is increasingly looking at tech acquisitions as a way to reinvent the fast-food experience. Today, it’s announcing that it’s buying Apprente, a startup building conversational agents that can automate voice-based ordering in multiple languages.

If that sounds like a good fit for fast-food drive-thrus, that’s exactly what McDonald’s leadership has in mind. In fact, the company has already been testing Apprente’s technology in select locations, creating voice-activated drive-thrus (along with robot fryers) that it said will offer “faster, simpler and more accurate order taking.”

McDonald’s said the technology also could be used in mobile and kiosk ordering. Presumably, besides lowering wait times, this could allow restaurants to operate with smaller staffs.

Earlier this year, McDonald’s acquired online personalization startup Dynamic Yield for more than $300 million, with the goal of creating a drive-thru experience that’s customized based on things like weather and restaurant traffic. It also invested in mobile app company Plexure.

Now the company is looking to double down on its tech investments by creating a new Silicon Valley-based group called McD Tech Labs, with the Apprente team becoming the group’s founding members, and Apprente co-founder Itamar Arel becoming vice president of McD Tech Labs. McDonald’s said it will expand the team by hiring more engineers, data scientists and other tech experts.

Philippe J DEWOST's insight:

A voice-activated Big Mac is getting closer as McDonald's enters the conversational, voice-driven AI space.

And instead of partnering with suppliers, it does so by acquiring technology startups and integrating them into its Silicon Valley-based McD Tech Labs.

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

AI in Law Enforcement Needs Clear Oversight

In the film Minority Report, mutants predict future crimes, allowing police to swoop in before they can be committed. In China, stopping “precrime” with algorithms is a part of law enforcement, using…
Philippe J DEWOST's insight:
One Camera per 100 citizens
No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Facebook's AI can convert one singer's voice into another


AI can generate storyboard animations from scripts, spot potholes and cracks in roads, and teach four-legged robots to recover when they fall. But what about adapting one person’s singing style to that of another? Yep — it’s got that down pat, too. In a paper published on the preprint server Arxiv.org (“Unsupervised Singing Voice Conversion“), scientists at Facebook AI Research and Tel Aviv University describe a system that directly converts audio of one singer to the voice of another. All the more impressive, it’s unsupervised, meaning it’s able to perform the conversion from unclassified, unannotated data it hasn’t previously encountered.

The team claims that their model was able to learn to convert between singers from just 5-30 minutes of their singing voices, thanks in part to an innovative training scheme and data augmentation technique.

“[Our approach] could lead, for example, to the ability to free oneself from some of the limitations of one’s own voice,” the paper’s authors wrote. “The proposed network is not conditioned on the text or on the notes [and doesn’t] require parallel training data between the various singers, nor [does it] employ a transcript of the audio to either text … or to musical notes … While existing pitch correction methods … correct local pitch shifts, our work offers flexibility along the other voice

Philippe J DEWOST's insight:

The End of The Voice?

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

The first AI-generated textbook shows how robot writers are good at compiling peer-reviewed research papers


Academic publisher Springer Nature has unveiled what it claims is the first research book generated using machine learning.

The book, titled Lithium-Ion Batteries: A Machine-Generated Summary of Current Research, isn’t exactly a snappy read. Instead, as the name suggests, it’s a summary of peer-reviewed papers published on the topic in question. It includes quotations, hyperlinks to the works cited, and automatically generated references. It’s also available to download and read for free if you have any trouble getting to sleep at night.

“a new era in scientific publishing”

While the book’s contents are soporific, the fact that it exists at all is exciting. Writing in the introduction, Springer Nature’s Henning Schoenenberger (a human) says books like this have the potential to start “a new era in scientific publishing” by automating drudgery.

Schoenenberger points out that, in the last three years alone, more than 53,000 research papers on lithium-ion batteries have been published. This represents a huge challenge for scientists who are trying to keep abreast of the field. But by using AI to automatically scan and summarize this output, scientists could save time and get on with important research.

“This method allows for readers to speed up the literature digestion process of a given field of research instead of reading through hundreds of published articles,” writes Schoenenberger. “At the same time, if needed, readers are always able to identify and click through to the underlying original source in order to dig deeper and further explore the subject.”

Although the recent boom in machine learning has greatly improved computers’ capacity to generate the written word, the output of these bots is still severely limited. They can’t contend with the long-term coherence and structure that human writers generate, and so endeavors like AI-generated fiction or poetry tend to be more about playing with formatting than creating compelling reading that’s enjoyed on its own merits.

Philippe J DEWOST's insight:

Artificial Intelligence can now write research books. When shall we expect a book about #AI itself?

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

A.I.-generated text is supercharging fake news. There are now A.I.-based countermeasures


“We believe that machines and humans excel at detecting fundamentally different aspects of generated text,” Sebastian Gehrmann, a Ph.D. candidate in Computer Science at Harvard, told Digital Trends. “Machine learning algorithms are great at picking up statistical patterns such as the ones we see in GLTR. However, at the moment machines do not actually understand the content of a text. That means that algorithms could be fooled by completely nonsensical text, as long as the patterns match the detection. Humans, on the other hand, can easily tell when a text does not make any sense, but cannot detect the same patterns we show in GLTR.”
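
As a hedged sketch of the statistical-pattern idea behind GLTR (a simplified check of our own, not the GLTR code): score how often each token falls within a language model's top-k predictions, since generated text tends to stay in that comfortable zone. The example assumes a recent version of the Hugging Face transformers library.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def top_k_fraction(text, k=10):
    # Fraction of tokens that the model itself ranks among its k most likely choices.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                 # (1, seq_len, vocab_size)
    hits = 0
    for pos in range(1, ids.shape[1]):             # predict each token from its prefix
        topk = torch.topk(logits[0, pos - 1], k).indices
        hits += int(ids[0, pos] in topk)
    return hits / (ids.shape[1] - 1)

# A high fraction is one (imperfect) hint that the text may be machine-generated.
print(top_k_fraction("The quick brown fox jumps over the lazy dog."))
```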

Philippe J DEWOST's insight:

Will AI ultimately discriminate between AI-generated and human-generated text? GLTR vs. GPT-2

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

AI spots legal problems with tech T&Cs in GDPR research project


Technology is the proverbial double-edged sword. And an experimental European research project is ensuring this axiom cuts very close to the industry’s bone indeed by applying machine learning technology to critically sift big tech’s privacy policies — to see whether AI can automatically identify violations of data protection law.

The still-in-training privacy policy and contract parsing tool — which is called ‘Claudette’, aka (automated) clause detector — is being developed by researchers at the European University Institute in Florence.

They’ve also now got support from European consumer organization BEUC — for a ‘Claudette meets GDPR‘ project — which specifically applies the tool to evaluate compliance with the EU’s General Data Protection Regulation.

Early results from this project have been released today, with BEUC saying the AI was able to automatically flag a range of problems with the language being used in tech T&Cs.

The researchers set Claudette to work analyzing the privacy policies of 14 companies in all — namely: Google, Facebook (and Instagram), Amazon, Apple, Microsoft, WhatsApp, Twitter, Uber, AirBnB, Booking, Skyscanner, Netflix, Steam and Epic Games — saying this group was selected to cover a range of online services and sectors.

And also because they are among the biggest online players and — I quote — “should be setting a good example for the market to follow”. Ehem, should.
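
As a hedged sketch of what automated clause flagging can look like in its simplest form (a generic supervised sentence classifier with made-up training examples, not Claudette's actual model):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: label 1 = potentially problematic clause, 0 = acceptable.
sentences = [
    "We may share your personal data with partners for any purpose.",
    "Your data is processed only to provide the service you requested.",
    "We can change these terms at any time without notifying you.",
    "You can request deletion of your personal data at any time.",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)

test = "We reserve the right to transfer your data to any third party."
print(clf.predict([test]), clf.predict_proba([test]))
```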

Philippe J DEWOST's insight:

When Claudette meets GDPR, we get an extremely interesting encounter between Artificial Intelligence and EU policy.

Richard Platt's curator insight, August 22, 2018 12:22 PM

Technology is the proverbial double-edged sword. And an experimental European research project is ensuring this axiom cuts very close to the industry’s bone indeed by applying machine learning techniques to critically sift big tech’s privacy policies — to see whether AI can automatically identify violations of data protection law.  The still-in-training privacy policy and contract parsing tool — which is called ‘Claudette‘: Aka (automated) clause detector — is being developed by researchers at the European University Institute in Florence.  They’ve also now got support from European consumer organization BEUC — for a ‘Claudette meets GDPR‘ project — which specifically applies the tool to evaluate compliance with the EU’s General Data Protection Regulation.  Early results from this project have been released today, with BEUC saying the AI was able to automatically flag a range of problems with the language being used in tech T&Cs.

The researchers set Claudette to work analyzing the privacy policies of 14 companies in all — namely: Google, Facebook (and Instagram), Amazon, Apple, Microsoft, WhatsApp, Twitter, Uber, Airbnb, Booking, Skyscanner, Netflix, Steam and Epic Games — saying this group was selected to cover a range of online services and sectors.  And also because they are among the biggest online players and — I quote — “should be setting a good example for the market to follow”. Ehem, should.   The AI analysis of the policies was carried out in June after the update to the EU’s data protection rules had come into force. The regulation tightens requirements on obtaining consent for processing citizens’ personal data by, for example, increasing transparency requirements — basically requiring that privacy policies be written in clear and intelligible language, explaining exactly how the data will be used, in order that people can make a genuine, informed choice to consent (or not consent).  In theory, all 15 parsed privacy policies should have been compliant with GDPR by June, as it came into force on May 25. However, some tech giants are already facing legal challenges to their interpretation of ‘consent’. And it’s fair to say the law has not vanquished the tech industry’s fuzzy language and logic overnight. Where user privacy is concerned, old, ugly habits die hard, clearly.  But that’s where BEUC is hoping AI technology can help.  It says that out of a combined 3,659 sentences (80,398 words) Claudette marked 401 sentences (11.0%) as containing unclear language, and 1,240 (33.9%) containing “potentially problematic” clauses or clauses providing

Scooped by Philippe J DEWOST
Scoop.it!

AI could get 100 times more energy-efficient with IBM’s new artificial synapses

Neural networks are the crown jewel of the AI boom. They gorge on data and do things like transcribe speech or describe images with near-perfect accuracy (see “10 breakthrough technologies 2013: Deep learning”). The catch is that neural nets, which are modeled loosely on the structure of the human brain, are typically constructed in software rather than hardware, and the software runs on conventional computer chips. That slows things down. IBM has now shown that building key features of a neural net directly in silicon can make it 100 times more efficient. Chips built this way might turbocharge machine learning in coming years. The IBM chip, like a neural net written in software, mimics the synapses that connect individual neurons in a brain. The strength of these synaptic connections needs to be tuned in order for the network to learn. In a living brain, this happens in the form of connections growing or withering over time. That is easy to reproduce in software but has proved infuriatingly difficult to
Philippe J DEWOST's insight:
The human brain consumes 4.2 g of glucose per hour. Neural networks are trying to catch up and silicon might be the next step with a 100x efficiency factor
No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

This is a Rifle: Fooling Neural Networks in the Physical World


We’ve developed an approach to generate 3D adversarial objects that reliably fool neural networks in the real world, no matter how the objects are looked at.

 

Neural network based classifiers reach near-human performance in many tasks, and they’re used in high-risk, real-world systems. Yet, these same neural networks are particularly vulnerable to adversarial examples, carefully perturbed inputs that cause targeted misclassification.
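
As a hedged sketch of the underlying idea, here is a simplified 2D "expectation over transformations" loop in PyTorch: optimize a small perturbation whose targeted misclassification survives random transformations. The transformations, target class, and bounds are toy assumptions, not the authors' 3D-printing pipeline.

```python
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
image = torch.rand(1, 3, 224, 224)           # stand-in for a real photograph
target = torch.tensor([413])                 # hypothetical target class index
delta = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

def random_transform(x):
    # Cheap stand-ins for viewpoint/lighting changes: brightness jitter + pixel shift.
    shift = int(torch.randint(-4, 5, (1,)))
    return torch.roll(x * (0.8 + 0.4 * torch.rand(1)), shifts=shift, dims=-1)

for step in range(100):
    opt.zero_grad()
    loss = 0.0
    for _ in range(4):                        # average the loss over transformations
        logits = model(random_transform((image + delta).clamp(0, 1)))
        loss = loss + F.cross_entropy(logits, target)
    loss.backward()
    opt.step()
    delta.data.clamp_(-0.05, 0.05)            # keep the perturbation small
```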

Philippe J DEWOST's insight:

The spirit of Magritte hides in neural networks: this team has been printing 3D objects that consistently fool machine vision object classifiers. A turtle becomes a rifle, while a cat is consistently recognized as guacamole.

Incidentally, this opens up a huge field for hide-and-seek and camouflage...

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Delphi acquires autonomous vehicle software supplier NuTonomy in $450 million deal

Delphi Automotive said it plans to acquire Boston-based autonomous vehicle software supplier NuTonomy in a deal that could be valued at $450 million. The acquisition, which is expected to close before the end of this year, will nearly double Delphi's 100-plus automated driving team with NuTonomy's 100 employees, including 70 engineers and scientists, the company said in a news release. NuTonomy will continue to operate in Boston, alongside Delphi's team in Boston, as well as in Delphi offices in Singapore; Pittsburgh; Santa Monica, Calif.; and in Silicon Valley in California. Glen De Vos, chief technology officer for Delphi, said the acquisition of NuTonomy allows Delphi access to the commercial truck market. "We think this is the tip of the spear for automated driving," De Vos said Tuesday in a conference call with reporters. "This dramatically accelerates our penetration in this marketplace."
Philippe J DEWOST's insight:
Meanwhile, in Europe, ...
No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

A computer was asked to predict which start-ups would be successful. The results were astonishing


When a magazine challenged a technology company to use AI to pick 50 unheard of companies that were set to flourish, the experiment yielded dramatic results.

In 2009, Ira Sager of Businessweek magazine set a challenge for Quid AI's CEO Bob Goodson: programme a computer to pick 50 unheard of companies that are set to rock the world.

The domain of picking “start-up winners” was - and largely still is - dominated by a belief held by the venture capital (VC) industry that machines do not play a role in the identification of winners. Ironically, the VC world, having fuelled the creation of computing, is one of the last areas of business to introduce computing to decision-making.

Nearly eight years later, the magazine revisited the list to see how “Goodson plus the machine” had performed. The results surprised even Goodson: Evernote, Spotify, Etsy, Zynga, Palantir, Cloudera, OPOWER – the list goes on. The list featured not only names widely known to the public and leaders of industries, but also high performers such as Ibibo, which had eight employees in 2009 when selected and now has $2 billion annual sales as the top hotel booking site in India. Twenty percent of the companies chosen had reached billion-dollar valuations.

To contextualize these results, Bloomberg Businessweek turned to one of the leading “fund of funds” in the US, which has been investing in VC funds since the 1980s and has one of the richest data sets available on actual company performance and for benchmarking VC portfolio performance.

The fund of funds was not named for compliance reasons, but its research showed that, had the 50 companies been a VC portfolio, it would have been the second-best-performing fund of all time. Only one fund has ever chosen better, which did most of its investments in the late 1990s and rode the dotcom bubble successfully. Of course, in this hypothetical portfolio, one could choose any company, whereas VCs often need to compete to invest.

Recently, Bloomberg asked Goodson to repeat the feat. Here, we’ll take an in-depth look at the methodology behind the new list, and also broader trends set to flourish in the market.

Philippe J DEWOST's insight:

Rage against the machine. I suggest a close look at the list closing this post, as one never knows...

No comment yet.
Rescooped by Philippe J DEWOST from pixels and pictures
Scoop.it!

Sony IMX500 - The World’s First AI Image Sensor Announced


The announcement describes two new Intelligent Vision CMOS chip models, the Sony IMX500 and IMX501. From what I can tell these are the same base chip, except that the 500 is the bare chip product, whilst the 501 is a packaged product.

They are both 1/2.3” type chips with 12.3 effective megapixels. It seems clear that one of the primary markets for the new chip is security and system cameras. However, having AI processing on the chip offers up some exciting new possibilities for future video cameras, particularly those mounted on drones or in action cameras like a GoPro or Insta360.

 

One prominent ability of the new chip lies in functions such as object or person identification. This could be via tracking such objects, or in fact actually identifying them. Output from the new chip doesn’t have to be in image form either. Metadata can be output so that it can simply send a description of what it sees without the accompanying visual image. This can reduce the data storage requirement by up to 10,000 times.
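
To make that "up to 10,000 times" claim concrete, here is a quick back-of-the-envelope calculation; the per-pixel and metadata sizes are our own assumptions, not Sony's figures.

```python
# Rough arithmetic behind metadata-only output (assumed sizes, not Sony figures).
pixels = 12_300_000            # 12.3 effective megapixels
bytes_per_pixel = 1.5          # ~12-bit raw output, before any compression
image_bytes = pixels * bytes_per_pixel

metadata_bytes = 2_000         # e.g. a short JSON list of detected objects

print(f"image   : {image_bytes / 1e6:.1f} MB")
print(f"metadata: {metadata_bytes} bytes")
print(f"reduction factor: {image_bytes / metadata_bytes:,.0f}x")   # on the order of 10,000x
```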

For security or system camera purposes, a camera equipped with the new chip could count the number of people passing by it, or identify low stock on a shop shelf. It could even be programmed to identify customer behaviour by way of heat maps.

 

For traditional cameras it could make autofocus systems better by being able to identify and track subjects much more precisely. With AI systems like this, it could make autofocus more intelligent by identifying areas of a picture that you are likely to be focussing on. For example, if you wanted to take a photograph of a flower, the AF system would know to focus on that rather than, say, the tree branch behind it. Facial recognition would also become much faster and more reliable.

Autofocus systems today are becoming incredibly good already, but if they were backed up by ultra fast on-chip object identification they could be even better. For 360 cameras, too, the ability to have more reliable object tracking metadata will help with post reframing.

Philippe J DEWOST's insight:

Sony announces in-sensor #AI image processing

Philippe J DEWOST's curator insight, May 23, 2020 8:58 AM

Capturing both pixels and "meaning".

Scooped by Philippe J DEWOST
Scoop.it!

Meet the World's Largest Chip : Inside the Cerebras CS-1 System


Cerebras Systems announced its new CS-1 system here at Supercomputing 2019. The company unveiled its Wafer Scale Engine (WSE) at Hot Chips earlier this year, and the chip is almost as impressive as it is unbelievable: the world's largest chip, weighing in at an unbelievable 400,000 cores, 1.2 trillion transistors, 46,225 square millimeters of silicon, and 18 GB of on-chip memory, all in one chip that is as large as an entire wafer. Add in that the chip sucks 15 kW of power and features 9 PB/s of memory bandwidth, and you've got a recipe for what is unquestionably the world's fastest AI processor.

 

Developing the chip was an incredibly complex task, but feeding all that compute enough power, not to mention enough cooling capacity, in a system reasonable enough for mass deployments is another matter entirely. Cerebras has pulled off that feat, and today the company unveiled the system and announced that the Argonne National Laboratory has already adopted it. The company also provided us detailed schematics of the internals of the system. 

Philippe J DEWOST's insight:

Computing power is one of the levers of power, full stop. And Europe still has not understood this. Kalray will not be enough.

#HardwareIsNotDead

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

OpenAI has published the text-generating AI it said was too dangerous to share


The research lab OpenAI has released the full version of a text-generating AI system that experts warned could be used for malicious purposes.

The institute originally announced the system, GPT-2, in February this year, but withheld the full version of the program out of fear it would be used to spread fake news, spam, and disinformation. Since then it’s released smaller, less complex versions of GPT-2 and studied their reception. Others also replicated the work. In a blog post this week, OpenAI now says it’s seen “no strong evidence of misuse” and has released the model in full.

 

GPT-2 is part of a new breed of text-generation systems that have impressed experts with their ability to generate coherent text from minimal prompts. The system was trained on eight million text documents scraped from the web and responds to text snippets supplied by users. Feed it a fake headline, for example, and it will write a news story; give it the first line of a poem and it’ll supply a whole verse.

It’s tricky to convey exactly how good GPT-2’s output is, but the model frequently produces eerily cogent writing that can often give the appearance of intelligence (though that’s not to say what GPT-2 is doing involves anything we’d recognize as cognition). Play around with the system long enough, though, and its limitations become clear. It particularly suffers with the challenge of long-term coherence; for example, using the names and attributes of characters consistently in a story, or sticking to a single subject in a news article.

The best way to get a feel for GPT-2’s abilities is to try it out yourself. You can access a web version at TalkToTransformer.com and enter your own prompts. (A “transformer” is a component of machine learning architecture used to create GPT-2 and its fellows.)
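
For readers who prefer a script to the web demo, here is a minimal sketch that samples a continuation from the released model using the Hugging Face transformers library (our own example, not OpenAI's tooling; the prompt and sampling settings are arbitrary):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")        # released GPT-2 weights
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation from the supplied snippet, as described above.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```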

Philippe J DEWOST's insight:

The following article could have been written by the GPT-2 system OpenAI has now published in full. Or maybe not.
And while we're at it, has this comment here really been generated by me?

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Google’s AI can now translate your speech while keeping your voice

Researchers trained a neural network to map audio “voiceprints” from one language to another. The results aren’t perfect, but you can sort of hear how Google’s translator was able to retain the voice and tone of the original speaker. It can do this because it converts audio input directly to audio output without any intermediary steps. In contrast, traditional translation systems convert audio into text, translate the text, and then resynthesize the audio, losing the characteristics of the original voice along the way.

The new system, dubbed the Translatotron, has three components, all of which look at the speaker’s audio spectrogram—a visual snapshot of the frequencies used when the sound is playing, often called a voiceprint. The first component uses a neural network trained to map the audio spectrogram in the input language to the audio spectrogram in the output language. The second converts the spectrogram into an audio wave that can be played. The third component can then layer the original speaker’s vocal characteristics back into the final audio output.

Not only does this approach produce more nuanced translations by retaining important nonverbal cues, but in theory it should also minimize translation error, because it reduces the task to fewer steps.

Translatotron is currently a proof of concept. During testing, the researchers trialed the system only with Spanish-to-English translation, which already took a lot of carefully curated training data. But audio outputs like the clip above demonstrate the potential for a commercial system later down the line. You can listen to more of them here.
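
As a hedged structural sketch of the three components described above (toy stand-ins, random projections rather than Google's trained networks; every function name is a placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)

def spectrogram(audio, n_bins=80, hop=160):
    # Toy magnitude "spectrogram": frame the signal and take an FFT per frame.
    frames = [audio[i:i + hop] for i in range(0, len(audio) - hop, hop)]
    return np.stack([np.abs(np.fft.rfft(f, n=2 * n_bins))[:n_bins] for f in frames])

def spectrogram_translator(src_spec):
    # Component 1 (stand-in): source-language spectrogram -> target-language spectrogram.
    W = rng.normal(size=(src_spec.shape[1], src_spec.shape[1]))
    return src_spec @ W

def vocoder(tgt_spec):
    # Component 2 (stand-in): turn the predicted spectrogram into a playable waveform.
    return tgt_spec.flatten()

def speaker_embedding(reference_audio):
    # Component 3 (stand-in): summarize the original speaker's vocal characteristics.
    return np.array([reference_audio.mean(), reference_audio.std()])

audio = rng.normal(size=16000)               # one second of fake 16 kHz audio
out = vocoder(spectrogram_translator(spectrogram(audio)))
print(out.shape, speaker_embedding(audio))
```
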
Philippe J DEWOST's insight:
This is the Voice
No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Tesla Autonomy Day almost Full Report


Cleantechnica has compiled the event video plus tons of liveblogging highlights: there is a trove of insight about where Tesla is going and how it is getting there.

They have designed their own FSD (Full Self Driving) system that doesn't need LIDAR and "learns" from shadow-mode driving of the whole deployed Tesla fleet. This is how they will be able to deploy Robotaxi mode with just a software update.

 

For instance,

“Early testing of new FSD hardware shows a 21× improvement in image processing capability with fully redundant computing capability.

“This is all done at a modest cost while delivering a fully redundant computing platform to all of Tesla’s vehicles currently in production.”

General summary from Kyle: “Our shit is really, really fast and we built it better than anyone else.”

Elon notes that Tesla finished this design 1½–2 years ago and then started on the next system design. They are not talking about the next design now, but they’re about halfway through it.

Some additional technical notes from Chanan Bos:

“An enthusiast Intel desktop i7 processor with 8 cores has 3 billion transistors; Tesla’s new chip has 6 billion. But that is still less than some crazy 18-core Intel chips like Skylake-X, which has 8.33 billion transistors. An iPhone has about 2 billion.

“So SRAM is much faster but is more expensive and has less storage compared to DRAM.

“Nvidia Xavier (available early 2018) had 30 TOPS (Tera Operations Per Second). Tesla’s FSD chip has 144 TOPS.”

 

Philippe J DEWOST's insight:

Tesla is going vertical at full speed as it designs its own Full Self Driving system. This is what will enable Robotaxi mode and will cut the cost of owning a Tesla by a factor of three.

The following report contains almost all of the slides presented, with an incredible level of detail. A must-read for anybody involved in autonomous vehicle technology and issues.

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Amazon Alexa scientists find ways to improve speech and sound recognition


How do assistants like Alexa discern sound? The answer lies in two Amazon research papers scheduled to be presented at this year’s International Conference on Acoustics, Speech, and Signal Processing in Aachen, Germany. Ming Sun, a senior speech scientist in the Alexa Speech group, detailed them this morning in a blog post.

“We develop[ed] a way to better characterize media audio by examining longer-duration audio streams versus merely classifying short audio snippets,” he said, “[and] we used semisupervised learning to train a system developed from an external dataset to do audio event detection.”

 

The first paper addresses the problem of media detection — that is, recognizing when voices captured from an assistant originate from a TV or radio rather than a human speaker. To tackle this, Sun and colleagues devised a machine learning model that identifies certain characteristics common to media sound, regardless of content, to delineate it from speech.
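
The papers themselves do not ship code, but as a hedged sketch of the general idea of classifying long-duration audio from summary spectral statistics (entirely our own toy example with synthetic clips, not Amazon's model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def features(audio, frame=400):
    # Long-window statistics: media audio tends to have steadier energy than live speech.
    frames = audio[: len(audio) // frame * frame].reshape(-1, frame)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    energy = spectra.sum(axis=1)
    return np.array([energy.mean(), energy.std(), spectra.mean(), spectra.std()])

def fake_clip(steady):
    # Synthetic stand-ins: "media" = steady broadband noise, "speech" = bursty noise.
    envelope = 1.0 if steady else np.repeat(rng.random(40), 400)[:16000]
    return rng.normal(size=16000) * envelope

X = np.stack([features(fake_clip(steady=i % 2 == 0)) for i in range(40)])
y = np.array([i % 2 for i in range(40)])            # 0 = media, 1 = live speech
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))
```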

 

Philippe J DEWOST's insight:

Alexa, listen to me, not the TV!

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

AI Could Scan IVF Embryos to Help Make Babies More Quickly


"In new research published today in NPJ Digital Medicine, scientists at Cornell University trained an off-the-shelf Google deep learning algorithm to identify IVF embryos as either good, fair, or poor, based on the likelihood each would successfully implant. This type of AI—the same neural network that identifies faces, animals, and objects in pictures uploaded to Google’s online services—has proven adept in medical settings. It has learned to diagnose diabetic blindness and identify the genetic mutations fueling cancerous tumor growth. IVF clinics could be where it’s headed next."

Philippe J DEWOST's insight:

Welcome to Gattac.ai

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Amazon patents new Alexa feature that knows when you're ill and offers you medicine


Amazon has patented a new version of its virtual assistant Alexa which can automatically detect when you’re ill and offer to sell you medicine.

The proposed feature would analyse speech and identify other signs of illness or emotion.

One example given in the patent is a woman coughing and sniffling while she speaks to her Amazon Echo device. Alexa first suggests some chicken soup to cure her cold, and then offers to order cough drops on Amazon.

If Amazon were to introduce this technology, it could compete with a service planned by the NHS. Health Secretary Matt Hancock said earlier this year that the NHS was working on making information from its NHS Choices online service available through Alexa.

Amazon’s system, however, doesn’t need to ask people whether they’re ill - it would just know automatically by analysing their speech.

[Image] Amazon's new patent filing shows an Amazon Echo device which knows when you're ill. Credit: Amazon
Adverts for sore throat products could be automatically played to people who sound like they have a sore throat, Amazon’s patent suggests.

The patent filing also covers the tracking of emotions using Alexa. Amazon describes a system where Alexa can tell by your voice if you’re feeling bored and tired, and then it would suggest things to do for those moods.

This futuristic version of Alexa would listen out for if users are crying and then class them as experiencing an “emotional abnormality.”

Philippe J DEWOST's insight:

This insanely smart move will help Amazon enter the healthcare and pharmacy markets and - maybe some day - the personal psychotherapy market. Woody Allen, meet your therapist Alexa...

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

AI Learns the Art of Debate with IBM Project Debater


Today, an artificial intelligence (AI) system engaged in the first ever live, public debates with humans. At an event held at IBM’s Watson West site in San Francisco, a champion debater and IBM’s AI system, Project Debater, began by preparing arguments for and against the statement: “We should subsidize space exploration.” Both sides then delivered a four-minute opening statement, a four-minute rebuttal, and a two-minute summary.
Project Debater made an opening argument that supported the statement with facts, including the points that space exploration benefits human kind because it can help advance scientific discoveries and it inspires young people to think beyond themselves. Noa Ovadia, the 2016 Israeli national debate champion, opposed the statement, arguing that there are better applications for government subsidies, including subsidies for scientific research here on Earth. After listening to Noa’s argument, Project Debater delivered a rebuttal speech, countering with the view that potential technological and economic benefits from space exploration outweigh other government spending. Following closing summaries from both sides, a snap poll showed that a majority of audience members thought Project Debater enriched their knowledge more than its human counterpart.
Just think about that for a moment. An AI system engaged with an expert human debater, listened to her argument, and responded convincingly with its own, unscripted reasoning to persuade an audience to consider its position on a controversial topic. Later, we held a second debate between the system and another Israeli debate expert, Dan Zafrir, that featured opposing arguments on the statement: “We should increase the use of telemedicine.”

For the initial demonstrations of this new technology, we selected from a curated list of topics to ensure a meaningful debate. But Project Debater was never trained on the topics. Over time, and in relevant business applications, we will naturally move toward using the system for issues that haven’t been screened.
Project Debater moves us a big step closer to one of the great boundaries in AI: mastering language. It is the latest in a long line of major AI innovations at IBM, which also include “Deep Blue,” the IBM system that took on chess world champion Garry Kasparov in 1997, and IBM Watson, which beat the top human champions on Jeopardy! in 2011.
Project Debater reflects the mission of IBM Research today to develop broad AI that learns across different disciplines to augment human intelligence. AI assistants have become highly useful to us through their ability to conduct sophisticated keyword searches and respond to simple questions or requests (such as “how many ounces in a liter?” or “call Mom”). Project Debater explores new territory: it absorbs massive and diverse sets of information and perspectives to help people build persuasive arguments and make well-informed decisions.

Philippe J DEWOST's insight:

I am sure DeepMind would strongly disagree and would be curious about the outcome of an argument between Alexa, DeepMind, Siri and Watson ...

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Watch NASA's Autonomous Drone Race a Human Pilot

NASA put its obstacle avoidance and vision-based research to the test by racing an A.I.-infused drone against a human opponent.

In October, NASA’s California-based Jet Propulsion Laboratory pitted a drone controlled by artificial intelligence against a professional human drone pilot named Ken Loo. According to NASA's press release, it had been researching autonomous drone technology for the past two years at that point, funded by Google and its interest in JPL’s vision-based navigation work. The race consisted of a time trial where the lap times and behaviors of both the A.I.-operated drone and the manually piloted drone were analyzed and compared. Let’s take a look at the results.

NASA said in its release that it developed three drones: Batman, Joker, and Nightwing. Researchers focused mostly on the intricate algorithms required to navigate efficiently through a race like this, namely obstacle avoidance and maximum speed through narrow environments. These algorithms were then combined with Google’s Tango technology, which JPL had a significant hand in as well. Task Manager of the JPL project, Rob Reid, said, “We pitted our algorithms against a human, who flies a lot more by feel. You can actually see that the A.I. flies the drone smoothly around the course, whereas human pilots tend to accelerate aggressively, so their path is jerkier.”

As it turned out, Loo’s speeds were much higher, and he was able to perform impressive aerial maneuvers to his benefit, but the A.I.-infused drones were more consistent and never gave in to fatigue. “This is definitely the densest track I’ve ever flown,” said Loo. “One of my faults as a pilot is I get tired easily. When I get mentally fatigued, I start to get lost, even if I’ve flown the course 10 times.”

Loo averaged 11.1 seconds per lap, while the autonomous unmanned aerial vehicles averaged 13.9 seconds. In other words, while Loo managed to reach higher speeds overall, the drones operating autonomously were more consistent, essentially flying a very similar lap and route each time. “Our autonomous drones can fly much faster,” said Reid. “One day you might see them racing professionally!”
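
A quick check of the "20%" figure quoted in the insight below, from the published lap times:

```python
human_lap = 11.1          # Ken Loo's average lap time, seconds
ai_lap = 13.9             # autonomous drones' average lap time, seconds

advantage = (ai_lap - human_lap) / ai_lap
print(f"The human pilot was about {advantage:.0%} faster per lap.")   # ~20%
```
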
Philippe J DEWOST's insight:
Race against the machine: the human pilot still beats NASA’s AI by 20% when it comes to flying a drone
No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Meet Power - Your Magical Home, by the author of Bitproof and Peter.AI  


From Louison Dumont's Facebook wall :

"The world is beating. Your heart is beating every X seconds, a flower takes Y days to flourish. If a tiger is running at you, your heart beats faster, if the flower receives more sun, it flourishes faster.


Computers so far have been like calculators. You would enter the commands, and get the result. Some fundamental commands got installed in every computer so that humans could enter higher level commands that would then execute lower level commands and produce astonishing outputs with little input.


What has been lacking however, is a synchronization between the computer's beating and the human's beating.


Of course, there is some synchronization already happening, using if trees and sometimes machine learning. But the synchronization is still so superficial that today's computers are basically blind, they force you to get out of your way to enter commands, they aren't aware of what is truly happening.

The future of computers is when they actually understand you, when they understand what humanity cares about.

 

This is why I started Power. We put chips inside bricks.

 

Because bricks are everywhere, bricks are where people spend most of their life. If we can turn on the bricks, we can turn on the people, and we can create the infrastructure required for the next wave of technological revolution to happen. With a new mesh of human-machine interactions and brick-to-brick (building-to-building) communication, we can create the future of Internet, decentralized and benefiting from a core understanding of human experience."

 
Philippe J DEWOST's insight:

Louison Dumont is back from his blockchain and AI ventures, and it seems he has dropped the chains to keep and combine blocks and AI. Intriguing.

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Self-driving cars are coming, but US roads aren't ready for the change

Many US roads need to be drastically improved in order for self-driving cars to have the widespread impact that many are currently predicting, argues 3M Global Government Affairs Manager Dan Veoni in a recent op-ed in The Hill. States and localities aren't making the investments to solve this problem, and the federal government isn't stepping in. Public-private partnerships could provide the necessary funding, but they won't spring up overnight.
Philippe J DEWOST's insight:
It will all be about the dialogue between vehicles and the infrastructure that supports them
No comment yet.