cross pond high tech
light views on high tech in both Europe and US
Scooped by Philippe J DEWOST

YouTuber develops open-source 3D printed VR gloves for just $22

A student YouTuber by the name of Lucas VRTech has designed and 3D printed a pair of low-cost finger tracking gloves for use in virtual reality. Named LucidVR, the open-source gloves are currently on iteration three, and grant users the ability to precisely track their fingers without the use of dedicated VR controllers. Lucas is […]
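The gloves report finger flexion from potentiometers mounted along each finger and read over a serial link. The exact firmware and wire protocol are Lucas's, but the core step, mapping a raw sensor reading to a normalized curl value, is simple. A minimal sketch (function name and calibration range are hypothetical, assuming a 10-bit ADC):

```python
def curl_from_adc(raw: int, adc_min: int = 0, adc_max: int = 1023) -> float:
    """Map a raw potentiometer reading (10-bit ADC) to a 0.0-1.0 finger curl.

    0.0 = finger fully open, 1.0 = fully curled. Clamps out-of-range
    readings so sensor noise never produces an impossible curl value.
    """
    span = adc_max - adc_min
    curl = (raw - adc_min) / span
    return max(0.0, min(1.0, curl))

# One frame of five raw readings (thumb..pinky), e.g. parsed from a serial packet
frame = [0, 256, 512, 768, 1023]
curls = [curl_from_adc(r) for r in frame]
```

Per-finger calibration (recording each user's own min/max readings) is what makes such a mapping usable in practice, since no two hands flex the sensors identically.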
Philippe J DEWOST's insight:

Oblong's implementation of the Minority Report Gloves was certainly more expensive — https://vimeo.com/76468455

Rescooped by Philippe J DEWOST from pixels and pictures

Sony IMX500 - The World’s First AI Image Sensor Announced

The announcement describes two new Intelligent Vision CMOS chip models, the Sony IMX500 and IMX501. From what I can tell these are the same base chip, except that the 500 is the bare chip product, whilst the 501 is a packaged product.

They are both 1/2.3” type chips with 12.3 effective megapixels. It seems clear that one of the primary markets for the new chip is security and system cameras. However, having AI processing on the chip opens up some exciting new possibilities for future video cameras, particularly those mounted on drones or in action cameras like a GoPro or Insta360.

 

One prominent ability of the new chip lies in functions such as object or person identification. This could mean tracking such objects or actually identifying them. Output from the new chip doesn’t have to be in image form, either: it can emit metadata instead, simply sending a description of what it sees without the accompanying visual image. This can reduce the data storage requirement by up to 10,000 times.
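To get a feel for the storage claim, compare the size of one raw frame against a metadata-only record. The numbers below are illustrative assumptions (12-bit packed raw pixels, a hypothetical JSON detection record), not Sony's actual formats:

```python
import json

# Approximate size of one uncompressed 12.3 MP frame,
# assuming 12-bit raw pixels packed at 1.5 bytes/pixel.
frame_bytes = int(12.3e6 * 1.5)

# A hypothetical metadata record the sensor could emit instead of pixels.
detection = {"label": "person", "confidence": 0.93,
             "bbox": [412, 180, 596, 540], "frame_id": 1024}
meta_bytes = len(json.dumps(detection).encode())

# How many times smaller the metadata is than the full frame.
reduction = frame_bytes / meta_bytes
```

Under these assumptions a single detection record is several orders of magnitude smaller than the frame it describes, which is the same ballpark as (or better than) the quoted 10,000x figure.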

For security or system camera purposes, a camera equipped with the new chip could count the number of people passing by it, or identify low stock on a shop shelf. It could even be programmed to identify customer behaviour by way of heat maps.

 

For traditional cameras, it could improve autofocus systems by identifying and tracking subjects much more precisely. AI like this could also make autofocus smarter by identifying the areas of a picture you are likely to be focussing on. For example, if you wanted to photograph a flower, the AF system would know to focus on it rather than, say, the tree branch behind it. Facial recognition would also become much faster and more reliable.

Autofocus systems are already incredibly good, but backed by ultra-fast on-chip object identification they could be even better. For 360 cameras, too, more reliable object-tracking metadata will help with reframing in post.

Philippe J DEWOST's insight:

Sony announces in-sensor #AI image processing

Philippe J DEWOST's curator insight, May 23, 2020 8:58 AM

Capturing both pixels and "meaning".

Scooped by Philippe J DEWOST

AWS launches its custom Inferentia AI chips

At its re:Invent conference, AWS today announced the launch of its Inferentia chips, which it first announced last year. These new chips promise to make inferencing (that is, running the machine learning models you pre-trained earlier) significantly faster and more cost-effective.

As AWS CEO Andy Jassy noted, a lot of companies are focusing on custom chips that let you train models (though Google and others would surely disagree there). Inferencing tends to work well on regular CPUs, but custom chips are obviously going to be faster. With Inferentia, AWS offers lower latency and three times the throughput at 40% lower cost per inference compared to a regular G4 instance on EC2.

The new Inf1 instances promise up to 2,000 TOPS and feature integrations with TensorFlow, PyTorch and MXNet, as well as the ONNX format for moving models between frameworks. For now, it’s only available in the EC2 compute service, but it will come to AWS’s container services and its SageMaker machine learning service soon, too.
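The headline claim (three times the throughput at 40% lower cost per inference) is easy to turn into concrete numbers. The hourly price and baseline throughput below are hypothetical placeholders, purely to show the arithmetic, not actual AWS rates:

```python
# Hypothetical, illustrative figures; not actual AWS pricing or benchmarks.
g4_price_hr = 0.526            # USD/hour for the baseline G4 instance
g4_throughput = 1_000          # inferences/second on that baseline

# Per the announcement: 3x the throughput at 40% lower cost per inference.
inf1_throughput = 3 * g4_throughput

def cost_per_million(price_hr: float, inferences_per_sec: float) -> float:
    """USD to run one million inferences at a sustained rate."""
    seconds = 1_000_000 / inferences_per_sec
    return price_hr * seconds / 3600

g4_cpm = cost_per_million(g4_price_hr, g4_throughput)

# Hourly price an Inf1 instance could charge and still be 40% cheaper
# per inference than the G4 baseline.
inf1_price_hr = 0.60 * g4_cpm * inf1_throughput * 3600 / 1_000_000
```

Under these assumptions an Inf1 instance could cost up to 1.8x the G4's hourly price and still be 40% cheaper per inference, because each hour does three times the work.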

Philippe J DEWOST's insight:

Amazon continues going vertical with custom AI chip design made available in its cloud offerings.

Philippe J DEWOST's curator insight, December 9, 2019 4:04 AM

Computing power is one of the levers of power, full stop. Follow-up: even booksellers are now designing their own proprietary processors (and this one is dedicated to AI). We are still waiting for FNAC's processor or Cdiscount's GPU...

Scooped by Philippe J DEWOST

Meet the World's Largest Chip: Inside the Cerebras CS-1 System

Cerebras Systems announced its new CS-1 system here at Supercomputing 2019. The company unveiled its Wafer Scale Engine (WSE) at Hot Chips earlier this year, and the chip is almost as impressive as it is unbelievable: the world's largest chip, weighing in at 400,000 cores, 1.2 trillion transistors, 46,225 square millimeters of silicon, and 18 GB of on-chip memory, all in one chip as large as an entire wafer. Add in that the chip draws 15 kW of power and features 9 PB/s of memory bandwidth, and you've got a recipe for what is unquestionably the world's fastest AI processor.

 

Developing the chip was an incredibly complex task, but feeding all that compute enough power, not to mention enough cooling capacity, in a system reasonable enough for mass deployment is another matter entirely. Cerebras has pulled off that feat, and today the company unveiled the system and announced that Argonne National Laboratory has already adopted it. The company also provided us detailed schematics of the system's internals.

Philippe J DEWOST's insight:

Computing power is one of the levers of power, full stop. And Europe still has not understood this. Kalray will not be enough.

#HardwareIsNotDead

Scooped by Philippe J DEWOST

DNS-over-HTTPS will eventually roll out in all major browsers, despite ISP opposition

All six major browser vendors have plans to support DNS-over-HTTPS (or DoH), a protocol that encrypts DNS traffic and helps improve a user's privacy on the web.

The DoH protocol has been one of the year's hot topics. When deployed inside a browser, it allows the browser to hide DNS requests and responses inside regular-looking HTTPS traffic.

Doing this makes a user's DNS traffic invisible to third-party network observers, such as ISPs. But while users love DoH and have deemed it a privacy boon, ISPs, networking operators, and cyber-security vendors hate it.

A UK ISP called Mozilla an "internet villain" for its plans to roll out DoH, and a Comcast-backed lobby group has been caught preparing a misleading document about DoH that they were planning to present to US lawmakers in the hopes of preventing DoH's broader rollout.

However, this opposition may have come a little too late. ZDNet has spent the week reaching out to major web browser vendors to gauge their future plans regarding DoH, and all of them plan to ship it, in one form or another.
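For concreteness, here is what a DoH lookup looks like at the code level, using Cloudflare's public JSON API for DoH (a real, documented endpoint; the helper names are mine). The DNS question rides inside an ordinary HTTPS request, so an on-path observer sees only TLS traffic to cloudflare-dns.com:

```python
import json
import urllib.request

def build_doh_request(name: str, rtype: str = "A") -> urllib.request.Request:
    """Build a DNS-over-HTTPS query against Cloudflare's JSON API."""
    url = f"https://cloudflare-dns.com/dns-query?name={name}&type={rtype}"
    # The accept header asks for the JSON response format.
    return urllib.request.Request(url, headers={"accept": "application/dns-json"})

def resolve(req: urllib.request.Request) -> list:
    """Send the query (requires network access) and return answer records."""
    with urllib.request.urlopen(req) as resp:
        payload = json.loads(resp.read())
    return [a["data"] for a in payload.get("Answer", [])]

req = build_doh_request("example.com")
```

Calling `resolve(req)` returns the A records for example.com; the point is that nothing in the request is a classic port-53 DNS packet an ISP could trivially log.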

Philippe J DEWOST's insight:

Moving up the stack and the value chain.

Encrypting DNS traffic inside HTTPS helps improve a user's #privacy on the Internet, and this rather technical piece explains how to activate it in most major browsers, except Apple's Safari.

Scooped by Philippe J DEWOST

Bowery Raises $50M More For Indoor, Pesticide-Free Farms

Indoor farming startup Bowery announced today it has raised an additional $50M in an extension of its Series B round. This comes nearly 11 months after it raised $90 million in a Series B round that we reported on at the time.

In a written statement, Bowery said the add-on was the result “of significant momentum in the business.” Temasek led the extension and Henry Kravis, co-founder of Kohlberg Kravis Roberts & Co., also put money in the “B+ round.” The financing brings the New York-based company’s total raised to $172.5 million since its inception in 2015, according to Bowery.

The startup, which grows what it calls “sustainably grown produce,” also announced today its new indoor farm in the Baltimore-DC area. The new farm is 3.5 times larger than Bowery’s last facility, according to the company. Its network of farms “essentially communicate using Bowery’s software,” according to the company, “and benefits from the collective intelligence of 2+ years of data.”

Philippe J DEWOST's insight:

Energy and Food for everyone on Earth are the foundational blocks of Mankind's fate and the greatest challenges of these times. Solving both will enable everything else and save what's left of us.

Scooped by Philippe J DEWOST

OpenAI has published the text-generating AI it said was too dangerous to share

The research lab OpenAI has released the full version of a text-generating AI system that experts warned could be used for malicious purposes.

The institute originally announced the system, GPT-2, in February this year, but withheld the full version of the program out of fear it would be used to spread fake news, spam, and disinformation. Since then it’s released smaller, less complex versions of GPT-2 and studied their reception. Others also replicated the work. In a blog post this week, OpenAI now says it’s seen “no strong evidence of misuse” and has released the model in full.

 

GPT-2 is part of a new breed of text-generation systems that have impressed experts with their ability to generate coherent text from minimal prompts. The system was trained on eight million text documents scraped from the web and responds to text snippets supplied by users. Feed it a fake headline, for example, and it will write a news story; give it the first line of a poem and it’ll supply a whole verse.

It’s tricky to convey exactly how good GPT-2’s output is, but the model frequently produces eerily cogent writing that can often give the appearance of intelligence (though that’s not to say what GPT-2 is doing involves anything we’d recognize as cognition). Play around with the system long enough, though, and its limitations become clear. It particularly struggles with long-term coherence: for example, using the names and attributes of characters consistently in a story, or sticking to a single subject in a news article.

The best way to get a feel for GPT-2’s abilities is to try it out yourself. You can access a web version at TalkToTransformer.com and enter your own prompts. (A “transformer” is a component of machine learning architecture used to create GPT-2 and its fellows.)
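None of this is OpenAI's code, but the generation loop behind systems like GPT-2 is conceptually small: the transformer assigns a score (logit) to every token in its vocabulary, and the generator samples the next token from a softmax over those scores. A toy sketch with a four-word vocabulary and made-up scores:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution.

    Lower temperature sharpens the distribution (safer, more repetitive
    text); higher temperature flattens it (more surprising text).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next(vocab, logits, temperature=1.0, rng=random):
    """Pick the next token the way GPT-2-style generators do."""
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

vocab = ["the", "cat", "sat", "."]
logits = [2.0, 1.0, 0.5, -1.0]   # hypothetical scores from a trained model
probs = softmax(logits)
```

Repeating `sample_next` and feeding each chosen token back into the model is, at its core, how a headline prompt becomes a whole news story.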

Philippe J DEWOST's insight:

The article above could have been written by the GPT-2 system OpenAI has just published. Or maybe not.
And while we're at it, has this comment here really been generated by me?

Scooped by Philippe J DEWOST

Huawei’s new 4K Vision TV claims voice, facial recognition, and tracking among a long list of AI powers

Huawei announced its own 4K television, the Huawei Vision, during the Mate 30 Pro event today. Like the Honor Vision and Vision Pro TVs that were announced back in August, Huawei’s self-branded TV runs the company’s brand-new Harmony OS software as its foundation.

Huawei will offer 65-inch and 75-inch models to start, with 55-inch and 85-inch models coming later. The Huawei TV features quantum dot color, thin metal bezels, and a pop-up camera for video conferencing that lowers into the television when not in use. On TVs, Harmony OS is able to serve as a hub for smart home devices that support the HiLink platform.

Huawei is also touting the TV’s AI capabilities, likening it to a “smart speaker with a big screen.” The TV supports voice commands and includes facial recognition and tracking capabilities. Apparently, there’s some AI mode that helps protect the eyes of young viewers — presumably by filtering blue light. The Vision also allows “one-hop projection” from a Huawei smartphone. The TV’s remote has a touchpad and charges over USB-C.

Philippe J DEWOST's insight:

TV is now watching you watching TV: is this smart?

Philippe J DEWOST's curator insight, September 25, 2019 12:47 AM

Still think YOU are watching TV?

Scooped by Philippe J DEWOST

The Epstein scandal at MIT shows the moral bankruptcy of techno-elites

The MIT-Epstein debacle shows "the prostitution of intellectual activity" and calls for a radical agenda, pleads Evgeny Morozov.

As Frederic Filloux points out in today's edition of The Monday Note:

"It matters because the MediaLab scandal is the tip of the iceberg. American universities are plagued by conflicts of interest. It is prevalent at Stanford for instance. I personally don’t mind an experienced professor charging $3,000 an hour to talk to foreign corporate visitors or asking $15,000 to appear at a conference. These people are high-valued and they also often work for free when needed. What bothers me is when a board membership collides with the content of a class, when a research paper is redacted to avoid upsetting a powerful VC firm who provides both generous donations and advisory fees to the faculty, when a prominent professor regurgitates a paid study they have done for a foreign bank as a support of a class, or when another keeps hammering Google because they advise a direct competitor. This is unethical and offensive to students who regularly pay $60,000-$100,000 in tuition each year."

Philippe J DEWOST's insight:

MIT sounds like "Money Infused Technology": shall we close the Media Lab, disband TED Talks, and refuse tech billionaires' money?

Scooped by Philippe J DEWOST

Chandrayaan-2: Vikram's orbit reduced, gets closer to landing

Operating independently for the first time since Chandrayaan-2 was launched on July 22, Vikram, the lander, underwent its first manoeuvre around the Moon.
Isro successfully completed the first de-orbiting manoeuvre at 8.50 am Tuesday, using for the first time the propulsion systems on Vikram. Until now, all operations were carried out by systems on the orbiter, from which Vikram, carrying the Pragyan rover inside it, separated on Monday.
"The duration of the manoeuvre was 4 seconds. The orbit of Vikram is 104 km x 128 km; the Chandrayaan-2 orbiter continues to orbit the Moon in the existing orbit, and both the orbiter and lander are healthy," Isro said.

The next de-orbiting manoeuvre is scheduled for September 4 between 3.30 am and 4.30 am.
As reported by TOI, the landing module (Vikram and Pragyan) successfully separated from the orbiter at 1.15 pm Monday (September 2), pushing India's Chandrayaan-2 mission into its last and most crucial leg: the Moon landing.
"The operation was great in the sense that we were able to separate the lander and rover from the orbiter—It is the first time in the history of Isro that we've separated two modules in space. This was very critical and we did it very meticulously," Isro chairman K Sivan told TOI soon after the separation.

Philippe J DEWOST's insight:

From M3 to M4? India gets closer to joining the club of Moon countries, after Israel missed the last step of its application. Meanwhile, Europe is still reviewing its application form.

Scooped by Philippe J DEWOST

Intel's Pohoiki Beach is a neuromorphic computer capable of simulating 8 million neurons

Neuromorphic engineering, also known as neuromorphic computing, describes the use of systems containing electronic analog circuits to mimic the neuro-biological architectures present in the nervous system. Scientists at MIT, Purdue, Stanford, IBM, HP, and elsewhere have pioneered pieces of full-stack systems, but arguably few have come closer than Intel when it comes to one of the longstanding goals of neuromorphic research: a supercomputer a thousand times more powerful than any today.

Case in point? Today, during the Defense Advanced Research Projects Agency's (DARPA) Electronics Resurgence Initiative 2019 summit in Detroit, Michigan, Intel unveiled a system codenamed "Pohoiki Beach," a 64-chip computer capable of simulating 8 million neurons in total. Intel Labs managing director Rich Uhlig said Pohoiki Beach will be made available to 60 research partners to "advance the field" and scale up AI algorithms like sparse coding and path planning.

"We are impressed with the early results demonstrated as we scale Loihi to create more powerful neuromorphic systems. Pohoiki Beach will now be available to more than 60 ecosystem partners, who will use this specialized system to solve complex, compute-intensive problems," said Uhlig.

Pohoiki Beach packs 64 128-core, 14-nanometer Loihi neuromorphic chips, first detailed in October 2017 and presented at the 2018 Neuro Inspired Computational Elements (NICE) workshop in Oregon. Each has a 60 mm² die and contains over 2 billion transistors, 130,000 artificial neurons, and 130 million synapses, in addition to three managing Lakemont cores for task orchestration.
Uniquely, Loihi features a programmable microcode learning engine for on-chip training of asynchronous spiking neural networks (SNNs) — AI models that incorporate time into their operating model, so that components of the model don't process input data simultaneously. This will be used to implement adaptive, self-modifying, event-driven, and fine-grained parallel computations with high efficiency.
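The "incorporate time" point is what separates spiking neurons from conventional artificial ones. A toy leaky integrate-and-fire neuron (nothing like Intel's microcode, just the textbook model that Loihi-class hardware accelerates) shows the behavior:

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron, the basic unit SNNs model.

    Unlike a conventional artificial neuron, state evolves over time:
    the membrane potential integrates input, leaks a fraction each step,
    and emits a discrete spike (then resets) when it crosses threshold.
    Returns the time steps at which the neuron spiked.
    """
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(t)      # the neuron fires an event...
            potential = 0.0       # ...and resets
    return spikes

# Constant weak input: the neuron fires periodically, not on every step.
spike_times = simulate_lif([0.3] * 20)
```

Because information is carried by the timing of sparse events rather than dense activations, hardware like Loihi only spends energy when a spike actually occurs.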
Philippe J DEWOST's insight:
Intel inside (your brain)
Scooped by Philippe J DEWOST

No Slack for you! Microsoft puts rival app on internal list of ‘prohibited and discouraged’ software

Slack is on an internal Microsoft list of prohibited technology — software, apps, online services and plug-ins that the company doesn’t want its employees using as part of their day-to-day work. But the document, obtained by GeekWire, asserts that the primary reason is security, not competition. And Slack is just one of many on the list.

 

GeekWire obtained an internal Microsoft list of prohibited and discouraged technology — software and online services that the company doesn’t want its employees using as part of their day-to-day work. We first picked up on rumblings of the prohibition from Microsoft employees who were surprised that they couldn’t use Slack at work, before tracking down the list and verifying its authenticity.

While the list references the competitive nature of these services in some situations, the primary criteria for landing in the “prohibited” category are related to IT security and safeguarding company secrets.

Slack is in the “prohibited” category of the internal Microsoft list, along with tools such as the Grammarly grammar checker and Kaspersky security software. Services in the “discouraged” category include Amazon Web Services, Google Docs, PagerDuty and even the cloud version of GitHub, the popular software development hub and community acquired by Microsoft last year for $7.5 billion.

Philippe J DEWOST's insight:

Microsoft prohibits Slack at work.

Rescooped by Philippe J DEWOST from pixels and pictures

Xiaomi hits back at Oppo with an ‘under-display’ camera of its own

Just hours after Oppo revealed the world’s first under-display camera, Xiaomi has hit back with its own take on the new technology. Xiaomi president Lin Bin has posted a video to Weibo (later re-posted to Twitter) of the Xiaomi Mi 9 with a front facing camera concealed entirely behind the phone’s screen. That means the new version of the handset has no need for any notches, hole-punches, or even pop-up selfie cameras alongside its OLED display.

It’s not entirely clear how Xiaomi’s new technology works. The Phone Talks notes that Xiaomi recently filed for a patent that appears to cover similar functionality, which uses two alternately-driven display portions to allow light to pass through to the camera sensor.

Philippe J DEWOST's insight:

Days of the front-facing camera may be over as Chinese smartphone makers race for under-display selfie sensors. Even if availability dates remain uncertain (as with foldable screens), it is interesting to watch hardware tech innovation shift towards China.


Scooped by Philippe J DEWOST

Microsoft acquires Nuance Communications

Microsoft acquired the AI speech technology company Nuance for $19.7B, its second-largest purchase after it bought LinkedIn for $26B in 2016. Microsoft reportedly wants to use Nuance's tech — which includes the transcription tool Dragon — in its health-care cloud products.

More:

  • The all-cash deal is expected to boost Microsoft's voice recognition and medical computing capabilities and offerings.
  • Dragon uses deep learning to transcribe a person's speech and improve its accuracy by adapting to their voice. It can transcribe doctor's visits, customer service calls, and voicemails.
  • Nuance has been licensing the technology to companies for years. The tech formed part of the basis for Apple's Siri, which could pose a conflict of interest between the companies if Nuance is still involved in Siri's operation.
  • In 2019, Microsoft and Nuance announced a partnership to incorporate AI assistants into doctors' visits. They later integrated Nuance's tech into Microsoft’s Teams.
  • The tech giant plans to implement Nuance into its cloud-based health-tech products launched in 2020, such as patient monitoring systems, electronic healthcare records, and care coordination.
  • The acquisition could also allow Microsoft to integrate advanced voice recognition into services including Teams and Bing and generate transcripts, according to Bloomberg analysts.
  • Microsoft will purchase Nuance for $56 per share, a 23% premium over its closing price Friday.
Philippe J DEWOST's insight:

Microsoft raises its voice and swallows Nuance.

Scooped by Philippe J DEWOST

Amazon, Verizon partner on new 5G WaveLength product

Amazon is using next-generation 5G wireless networks to help businesses download data from the cloud faster.

At Amazon Web Services’ annual re:Invent conference on Tuesday, AWS CEO Andy Jassy said the company is introducing a new service, called WaveLength, which puts technology from AWS “at the edge of the 5G network,” or closer to users’ devices. It has the potential to deliver single-digit millisecond latencies to users, according to Amazon.

At launch, Amazon is partnering with Verizon to incorporate WaveLength technology into parts of its wireless network. Amazon is also working with other global partners, such as Vodafone, KDDI and SK Telecom.

Lower latency is one of the big benefits that’s expected to arrive with 5G networks. This means it doesn’t take as long for devices to communicate with each other. For users, it results in fewer disruptions and shorter lag times when streaming videos, among other applications. 5G has the potential for many business-to-business applications, such as improving connectivity of IoT devices in manufacturing, self-driving, health care and other areas, in addition to consumer applications, such as faster streaming on phones.

“The connectivity and the speed is just two things,” Verizon CEO Hans Vestberg said on stage Tuesday at AWS’ re:Invent conference. “We can with 5G now bring the processing out to the edge because we have a virtualized network.”

With the partnership, AWS’ compute, storage, database and analytics tools are all “embedded” at the edge of 5G networks, Jassy said in an interview with CNBC’s Jon Fortt that aired Tuesday.

“That means now you only go from the device to the metro aggregation site, which is where the 5G tower is, where AWS is embedded there, and you get AWS,” Jassy said. “So it totally changes the response rates and the latency and what you can get done.”

Amazon is launching WaveLength at a time when excitement is ramping up around 5G networks. The technology is expected to be used more broadly by device makers, carriers and cable companies in 2020.

Philippe J DEWOST's insight:

Interesting announcement made by Amazon about an AWS technology being embedded in 5G towers, rather than an offering announced by Verizon with Amazon as a pioneer customer...

Scooped by Philippe J DEWOST

Scientists have found a way to decode brain signals into speech

You don’t have to think about it: when you speak, your brain sends signals to your lips, tongue, jaw, and larynx, which work together to produce the intended sounds.

Now scientists in San Francisco say they’ve tapped these brain signals to create a device capable of spitting out complete phrases, like “Don’t do Charlie’s dirty dishes” and “Critical equipment needs proper maintenance.”

The research is a step toward a system that would be able to help severely paralyzed people speak—and, maybe one day, consumer gadgets that let anyone send a text straight from the brain. 

A team led by neurosurgeon Edward Chang at the University of California, San Francisco, recorded from the brains of five people with epilepsy, who were already undergoing brain surgery, as they spoke from a list of 100 phrases.

When Chang’s team subsequently fed the signals to a computer model of the human vocal system, it generated synthesized speech that was about half intelligible.

The effort doesn’t pick up on abstract thought, but instead listens for nerves firing as they tell your vocal organs to move. Previously, researchers have used such motor signals from other parts of the brain to control robotic arms.

“We are tapping into the parts of the brain that control these movements—we are trying to decode movements, rather than speech directly,” says Chang.

In Chang’s experiment, the signals were recorded using a flexible pad of electrodes called an electrocorticography array, or ECoG, that rests on the brain’s surface.

To test how well the signals could be used to re-create what the patients had said, the researchers played the synthesized results to people hired on Mechanical Turk, a crowdsourcing site, who tried to transcribe them using a pool of possible words.  Those listeners could understand about 50 to 70% of the words, on average.
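The 50-to-70% figure is a word-level intelligibility score over those transcriptions. A crude sketch of such scoring (simple positional word match; the study's actual protocol constrained listeners to pools of candidate words and was more careful than this):

```python
def word_accuracy(reference: str, transcript: str) -> float:
    """Fraction of reference words the listener transcribed correctly,
    matched by position. A rough stand-in for the word-level
    intelligibility scoring applied to the Mechanical Turk transcripts.
    """
    ref = reference.lower().split()
    hyp = transcript.lower().split()
    correct = sum(r == h for r, h in zip(ref, hyp))
    return correct / len(ref)

# Phrases taken from the article; the second transcript has one error.
perfect = word_accuracy("critical equipment needs proper maintenance",
                        "critical equipment needs proper maintenance")
partial = word_accuracy("don't do charlie's dirty dishes",
                        "don't do charlie's dirty fishes")
```

Averaging such per-phrase scores across listeners is what yields an overall intelligibility estimate like "50 to 70% of words understood."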

“This is probably the best work being done in BCI [brain-computer interfaces] right now,” says Andrew Schwartz, a researcher on such technologies at the University of Pittsburgh. He says if researchers were to put probes within the brain tissue, not just overlying the brain, the accuracy could be far greater.

Previous efforts have sought to reconstruct words or word sounds from brain signals. In January of this year, for example, researchers at Columbia University measured signals in the auditory part of the brain as subjects heard someone else speak the numbers 0 to 9. They were then able to determine what number had been heard.

Brain-computer interfaces are not yet advanced enough, nor simple enough, to assist people who are paralyzed, although that is an objective of scientists.

Last year, another researcher at UCSF began recruiting people with ALS, or Lou Gehrig’s disease, to receive ECoG implants. That study will attempt to synthesize speech, according to a description of the trial, as well as asking patients to control an exoskeleton supporting their arms.

 

Chang says his own system is not being tested in patients. And it remains unclear if it would work for people unable to move their mouth. The UCSF team says that their set-up didn’t work nearly as well when they asked speakers to silently mouth words instead of saying them aloud.

Some Silicon Valley companies have said they hope to develop commercial thought-to-text brain readers. One of them, Facebook, says it is funding related research at UCSF “to demonstrate the first silent speech interface capable of typing 100 words per minute,” according to a spokesperson.

Facebook didn’t pay for the current study, and UCSF declined to describe what further research it's doing on behalf of the social media giant. But Facebook says it sees the implanted system as a step towards the type of consumer device it wants to create.

“This goal is well aligned with UCSF's mission to develop an implantable communications prosthesis for people who cannot speak – a mission we support. Facebook is not developing products that require implantable devices, but the research at UCSF may inform research into non-invasive technologies,” the company said.

Chang says he is “not aware” of any technology able to work from outside the brain, where the signals mix together and become difficult to read.

“The study that we did was involving people having neurosurgery. We are really not aware of currently available noninvasive technology that could allow you to do this from outside the head,” he says. “Believe me, if it did exist it would have profound medical applications.”

Philippe J DEWOST's insight:

A still distant step towards a system that would ultimately let people send texts straight from their brains.

Scooped by Philippe J DEWOST

Hyderabad based Fireflies.ai, founded by MIT & Microsoft alumni, raises $5m to put a voice assistant in every meeting

How does Fireflies.ai work? Users can connect their Google or Outlook calendars with Fireflies and have our AI system capture meetings in real time across more than a dozen different web-conferencing platforms like Zoom, Google Meet, Skype, GoToMeeting, Webex, and many more systems. These meetings are then indexed, transcribed, and made searchable inside the Fireflies dashboard. You can comment, annotate key moments, and automatically extract relevant information around numerous topics like next steps, questions, and red flags.

Instead of spending time frantically taking notes in meetings, Fireflies users take comfort knowing that shortly after a meeting they are provided with a transcript of the conversation and an easy way to collaborate on the project going forward.

Fireflies can also sync all this vital information back into the places where you already work thanks to robust integrations with Slack, Salesforce, Hubspot, and other platforms.

Fireflies.ai is the bridge that helps data flow seamlessly from your communication systems to your system of records.

This approach is possible today because of major technological changes over the last 5 years in the field of machine learning. Fireflies leverages recent enhancements in Automatic Speech Recognition (ASR), natural language processing (NLP), and neural nets to create a seamless way for users to record, annotate, search, and share important moments from their meetings.
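The record → transcribe → index → search loop described above can be sketched with a toy inverted index. This is purely illustrative: the `TranscriptIndex` class and its methods are invented for this sketch and are not Fireflies' actual API or architecture.

```python
# Toy sketch of transcript indexing and keyword search, assuming a plain
# inverted index; a real ASR/NLP pipeline is of course far more involved.
from collections import defaultdict


class TranscriptIndex:
    def __init__(self):
        self.index = defaultdict(set)   # word -> set of (meeting_id, line_no)
        self.lines = {}                 # (meeting_id, line_no) -> line text

    def add_meeting(self, meeting_id, transcript):
        """Index each line of an (already transcribed) meeting."""
        for n, line in enumerate(transcript):
            self.lines[(meeting_id, n)] = line
            for word in line.lower().split():
                self.index[word.strip(".,?!")].add((meeting_id, n))

    def search(self, word):
        """Return every indexed transcript line containing the word."""
        return [self.lines[loc] for loc in sorted(self.index[word.lower()])]


idx = TranscriptIndex()
idx.add_meeting("standup-01", ["Next steps: ship the beta.",
                               "Any red flags on the launch?"])
hits = idx.search("beta")  # -> ["Next steps: ship the beta."]
```

Even this crude version shows why transcripts beat raw audio: once the speech is text, searching for "red flags" across a year of meetings is a dictionary lookup, not an hour of re-listening.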

Who is Fireflies for? The beauty of Fireflies is that it’s been adopted by people in different roles across organizations big and small:

  • Sales managers use Fireflies to review their reps’ calls at lightning speed and provide on-the-spot coaching.
  • Marketers create key customer soundbites from calls to use in their campaigns.
  • Recruiters no longer worry about taking hasty notes and instead spend more time paying attention to candidates during interviews.
  • Engineers refer back to specific parts of calls using our smart search capabilities to make everyone aware of the decisions that were finalized.
  • Product managers and executives rely on Fireflies to document knowledge and important initiatives that are discussed during all-hands and product planning meetings.

How to get access? Fireflies has a free tier for individuals and teams to easily get started. For more advanced capabilities like augmented call search, more storage, and admin controls, we offer different tiers for growing teams and enterprises. You can learn more about our pricing and tiers by going to fireflies.ai/pricing.

 

Philippe J DEWOST's insight:

What if meeting minutes were automatic? And what if distributing the decisions made, and following up on them, were automatic too?

No more typing away on a keyboard and polluting the meeting, no more spending precious time on it...

That is the promise of this new artificial-intelligence-based application (read: automated content and context recognition).

Let's remain cautious, though; voice dictation has been a regularly disappointed fantasy since the 1990s and Dragon Dictate on PC, then 2009 and the SpinVox scandal on mobile. From now on, the reservations will bear more on the trade-off between privacy and efficiency, and that game is by no means won.

At the very least, Fireflies.ai deserves credit for taking another run at speech recognition...

Philippe J DEWOST's curator insight, December 2, 2019 3:27 AM

What if meeting notes were automatically generated and made available shortly after the conference call? What if action items were assigned too?

No more need for post-processing, nor for in-meeting typing pollution: here is the #AI promise (read: "automated pattern detection and in-context recognition") made by Fireflies.

History reminds us how cautiously we shall face the longstanding fantasy of voice dictation (not speaking here of voice assistants): Dragon Dictate in the 1990s never lived up to the promise, nor did SpinVox in 2009 (it ended in tears). Now, with growing concerns over the privacy vs. convenience balance, the war is still not over.

Scooped by Philippe J DEWOST
Scoop.it!

The Micromobility Mirage

“Mirage” is the polite word I use to characterize the idea that small individual transportation vehicles could be the solution to urban congestion or pollution. Micromobility is not an innovation…

 

Micromobility is socially selective, environmentally unfriendly, and it is not even supported by a sustainable model. But it is still well-hyped and it continues to draw large investments. Here is why.
“Mirage” is the polite word I use to characterize the idea that small individual transportation vehicles could be the solution to urban congestion or pollution.


Micromobility is not an innovation. It is a bad remedy to a failure of the public apparatus — at the state or the municipal level — unable to develop adequate infrastructure despite opulent fiscal bases.

Philippe J DEWOST's insight:

An excellent article by Frédéric @Filloux describing a $4.4 billion bubble. Micromobility is “socially selective, environmentally unfriendly, and it is not even supported by a sustainable model. But it is still well-hyped and it continues to draw large investments.”

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Text messages delayed from February were mysteriously sent overnight

Text messages delayed from February were mysteriously sent overnight | cross pond high tech | Scoop.it

Something strange is happening with text messages in the US right now. Overnight, a multitude of people received text messages that appear to have originally been sent on or around Valentine’s Day 2019. These people never received the text messages in the first place; the people who sent the messages had no idea that they had never been received, and they did nothing to attempt to resend them overnight.

Delayed messages were sent from and received by both iPhones and Android phones, and the messages seem to have been sent and received across all major carriers in the US. Many of the complaints involve T-Mobile or Sprint, although AT&T and Verizon have been mentioned as well. People using regional US carriers, carriers in Canada, and even Google Voice also seem to have experienced delays.

 

At fault seems to be a system that multiple cell carriers use for messaging. A Sprint spokesperson said a “maintenance update” last night caused the error. “The issue was resolved not long after it occurred,” the spokesperson said. “We apologize for any confusion this may have caused.”

T-Mobile blamed the issue on a “third party vendor.” It didn’t clarify what company that was or what service they provided. “We’re aware of this and it is resolved,” a T-Mobile spokesperson said.

The statements speak to why the messages were sent last night, but it’s still unknown why the messages were all from Valentine’s Day and weren’t sent in the first place. The Verge has asked Sprint and T-Mobile to provide more information about what happened.

Dozens and dozens of people have posted about receiving messages overnight. Most expressed confusion or spoke to the awkwardness of the situation, having been told by friends that they sent a mysterious early-morning text message. A few spoke to much more distressing repercussions of this error: one person said they received a message from an ex-boyfriend who had died; another received messages from a best friend who is now dead.

“It was a punch in the gut. Honestly I thought I was dreaming and for a second I thought she was still here,” said one person, who goes by KuribHoe on Twitter, who received the message from their best friend who had died. “The last few months haven’t been easy and just when I thought I was getting some type of closure this just ripped open a new hole.”

Barbara Coll, who lives in California, said she received an old message from her sister saying that their mom was upbeat and doing well. She knew the message must have been sent before their mother died in June, but she said it was still shocking to receive.

“I haven’t stopped thinking about that message since I got it,” Coll said. “I’m out looking at the ocean right now because I needed a break.” Coll said her sister also received a delayed message that she had sent about planning to visit to see their mother.

Another person said a text message that she sent in February was received at 5AM by someone who is now her ex-boyfriend. The result was “a lot of confusion,” said Jamie. But she said that “it was actually kinda nice that it opened up a short conversation.”

The Verge has reached out to Verizon, AT&T, and Google for comment.

Philippe J DEWOST's insight:

SMS was never intended to be a consumer messaging platform, nor to guarantee text message delivery.

It was a technical extension of the GSM standard, designed for one-way asynchronous service messages between the network and devices: in a way, text messaging was about hacking a network feature.

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

This $2800 "concept phone" is almost entirely made of screen

This $2800 "concept phone" is almost entirely made of screen | cross pond high tech | Scoop.it

Xiaomi’s Mi Mix series has always pushed the boundaries of phone screens and form factors, from the original model that kicked off the bezel wars to last year’s sliding, notchless Mi Mix 3. Now, just as we’re starting to see “waterfall” displays with extreme curved edges, Xiaomi is taking this to a wild new level with the Mi Mix Alpha.

The “surround screen” on the Alpha wraps entirely around the device to the point where it meets the camera module on the other side. The effect is of a phone that’s almost completely made of screen, with status icons like network signal and battery charge level displayed on the side. Pressure-sensitive volume buttons are also shown on the side of the phone. Xiaomi is claiming more than 180 percent screen-to-body ratio, a stat that no longer makes any sense to cite at all.

The Mix Alpha uses Samsung’s new 108-megapixel camera sensor, which was co-developed with Xiaomi. As with other recent high-resolution Samsung sensors, pixels are combined into 2x2 squares for better light sensitivity in low light, which in this case will produce 27-megapixel images.

We’ll have to see how that works in practice, but the 1/1.33-inch sensor is unusually large for a phone and should give the Mix Alpha a lot of light-gathering capability. There’s also no need for a selfie camera — you just turn the phone around and use the rear portion of the display as a viewfinder for the 108-megapixel shooter.
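The 2x2 binning described above is easy to model: four neighbouring pixels are averaged into one output pixel, trading resolution (108 MP down to 27 MP) for light sensitivity. A minimal NumPy sketch, ignoring the Bayer colour mosaic that a real sensor has to respect (actual sensors bin same-colour pixels within the mosaic):

```python
import numpy as np


def bin_2x2(raw):
    """Average each 2x2 block of pixels into one output pixel.

    Simplified model of the binning mode described in the article:
    resolution drops by 4x while each output pixel gathers the light
    of four physical photosites.
    """
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))


# A 4x4 "sensor" becomes a 2x2 image, mirroring 108 MP -> 27 MP.
frame = np.arange(16, dtype=float).reshape(4, 4)
binned = bin_2x2(frame)  # binned[0, 0] averages pixels 0, 1, 4, 5 -> 2.5
```

The reshape trick groups each 2x2 neighbourhood along two extra axes, so a single `mean` over those axes does the binning without explicit loops.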

 

Philippe J DEWOST's insight:

The Xiaomi Mi Mix Alpha display wraps around the entire phone, which brings some interesting possibilities (no need for a front-facing selfie camera) as well as questions (will it break, as the design rules out any cover or protection?).

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

McDonald’s acquires Apprente to bring voice technology to drive-thrus

McDonald’s acquires Apprente to bring voice technology to drive-thrus | cross pond high tech | Scoop.it

McDonald’s is increasingly looking at tech acquisitions as a way to reinvent the fast-food experience. Today, it’s announcing that it’s buying Apprente, a startup building conversational agents that can automate voice-based ordering in multiple languages.

If that sounds like a good fit for fast-food drive-thrus, that’s exactly what McDonald’s leadership has in mind. In fact, the company has already been testing Apprente’s technology in select locations, creating voice-activated drive-thrus (along with robot fryers) that it said will offer “faster, simpler and more accurate order taking.”

McDonald’s said the technology also could be used in mobile and kiosk ordering. Presumably, besides lowering wait times, this could allow restaurants to operate with smaller staffs.

Earlier this year, McDonald’s acquired online personalization startup Dynamic Yield for more than $300 million, with the goal of creating a drive-thru experience that’s customized based on things like weather and restaurant traffic. It also invested in mobile app company Plexure.

Now the company is looking to double down on its tech investments by creating a new Silicon Valley-based group called McD Tech Labs, with the Apprente team becoming the group’s founding members, and Apprente co-founder Itamar Arel becoming vice president of McD Tech Labs. McDonald’s said it will expand the team by hiring more engineers, data scientists and other tech experts.

Philippe J DEWOST's insight:

A voice-activated Big Mac is getting closer as McDonald's enters the conversational, voice-driven AI space.

And instead of partnering with suppliers, they do this by acquiring technology startups and integrating them in Silicon Valley based McD Tech Labs.

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

A Successful Artificial Memory Has Been Created

A Successful Artificial Memory Has Been Created | cross pond high tech | Scoop.it

We learn from our personal interaction with the world, and our memories of those experiences help guide our behaviors. Experience and memory are inexorably linked, or at least they seemed to be before a recent report on the formation of completely artificial memories. Using laboratory animals, investigators reverse engineered a specific natural memory by mapping the brain circuits underlying its formation. They then “trained” another animal by stimulating brain cells in the pattern of the natural memory. Doing so created an artificial memory that was retained and recalled in a manner indistinguishable from a natural one.

Memories are essential to the sense of identity that emerges from the narrative of personal experience. This study is remarkable because it demonstrates that by manipulating specific circuits in the brain, memories can be separated from that narrative and formed in the complete absence of real experience. The work shows that brain circuits that normally respond to specific experiences can be artificially stimulated and linked together in an artificial memory. That memory can be elicited by the appropriate sensory cues in the real environment. The research provides some fundamental understanding of how memories are formed in the brain and is part of a burgeoning science of memory manipulation that includes the transfer, prosthetic enhancement and erasure of memory. These efforts could have a tremendous impact on a wide range of individuals, from those struggling with memory impairments to those enduring traumatic memories, and they also have broad social and ethical implications.

 

Philippe J DEWOST's insight:

Artificial Memory is not what you think. Especially as I remember reading this article in Playboy magazine.

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Teen claims to tweet from her smart fridge – but did she really?

Teen claims to tweet from her smart fridge – but did she really? | cross pond high tech | Scoop.it
A Twitter user’s claim to have tweeted from a kitchen appliance went viral but experts have cast doubt
Philippe J DEWOST's insight:

Is the fridge on Twitter?

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

Elon Musk unveils Neuralink’s plans for brain-reading ‘threads’ - but what about brain-writing ?

Elon Musk unveils Neuralink’s plans for brain-reading ‘threads’ - but what about brain-writing ? | cross pond high tech | Scoop.it

Elon Musk’s Neuralink, the secretive company developing brain-machine interfaces, showed off some of the technology it has been developing to the public for the first time. The goal is to eventually begin implanting devices in paralyzed humans, allowing them to control phones or computers.

The first big advance is flexible “threads,” which are less likely to damage the brain than the materials currently used in brain-machine interfaces. These threads also create the possibility of transferring a higher volume of data, according to a white paper credited to “Elon Musk & Neuralink.” The abstract notes that the system could include “as many as 3,072 electrodes per array distributed across 96 threads.”

The threads are 4 to 6 μm in width, which makes them considerably thinner than a human hair. In addition to developing the threads, Neuralink’s other big advance is a machine that automatically embeds them.
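The quoted figures invite a quick sanity check: 3,072 electrodes distributed across 96 threads comes to 32 electrodes per thread. This is arithmetic on the white paper's numbers, not an officially stated layout:

```python
# Sanity check on the electrode counts quoted from the Neuralink white
# paper: "as many as 3,072 electrodes per array distributed across 96
# threads" implies 32 electrodes per thread.
electrodes_total = 3072
threads = 96

electrodes_per_thread = electrodes_total // threads  # -> 32
assert electrodes_per_thread * threads == electrodes_total  # divides evenly
```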

Musk gave a big presentation of Neuralink’s research Tuesday night, though he said that it wasn’t simply for hype. “The main reason for doing this presentation is recruiting,” Musk said, asking people to go apply to work there. Max Hodak, president of Neuralink, also came on stage and admitted that he wasn’t originally sure “this technology was a good idea,” but that Musk convinced him it would be possible.

In the future, scientists from Neuralink hope to use a laser beam to get through the skull, rather than drilling holes, they said in interviews with The New York Times. Early experiments will be done with neuroscientists at Stanford University, according to that report. “We hope to have this in a human patient by the end of next year,” Musk said.

During a Q&A at the end of the presentation, Musk revealed results that the rest of the team hadn’t realized he would: “A monkey has been able to control a computer with its brain.”

Philippe J DEWOST's insight:

Brain-reading sounds like a great challenge and a huge promise. But what about the logical next step, brain-writing?

Worrisome when you realize that the USB-C interface is symmetrical and also allows charging... (well, technically all USB cables are bidirectional, but at least symbolically there is a "master" and a "slave" side in the form factor).

No comment yet.
Scooped by Philippe J DEWOST
Scoop.it!

The end of mobile

The end of mobile | cross pond high tech | Scoop.it

There are about 5.3bn people on earth aged over 15. Of these, around 5bn have a mobile phone and 4bn a smartphone. The platform wars ended a long time ago, and Apple and Google both won (outside China, at least). So it is time to stop making charts.

Philippe J DEWOST's insight:

5bn people have a mobile phone now, and 4bn have a smartphone. This interesting post details usage as well as it draws a perspective with the PC market. And concludes it is time to stop making charts.

No comment yet.