UPDATED 09:00 EDT / DECEMBER 09 2021

AI

Headroom launches AI-enhanced videoconferencing to bring humanity back to meetings

Headroom Inc., a startup that provides a videoconferencing platform, today announced the launch of its service to take on Zoom, Meet and Teams by using artificial intelligence to transform meetings and make them more human.

With the global pandemic, more of the workforce has shifted to remote and hybrid arrangements, making videoconferencing the norm. That shift also gave rise to an effect known as “Zoom fatigue”: sitting at arm’s length from the screen, locked to the keyboard and watching so many people up close can be draining.

To combat this, Headroom’s co-founders Julian Green, a former Google LLC executive, and Andrew Rabinovich, a former Magic Leap Inc. executive, built AI into the platform so participants can meet “hands-free”: it produces interactive transcriptions, automatically generates video highlights and uses gesture recognition to capture interactions that would otherwise be missed.

Green told SiliconANGLE that from what he saw of video meetings inside big companies, people could get caught in a “sort of nightmare of back-to-back meetings.”

“Everyone’s calendars would end up especially stuffed full, especially at a large campus such as Google,” he said. “You can’t really go between one meeting and another; you can’t get across campus physically. So, you end up in your own meeting room or at your desk. It’s really impersonal and you miss out on that really rich, in-person communication.”

One big way that Headroom helps with this is gesture recognition. In a video meeting, each participant’s video is confined to a small 2D box on the screen, so body language is hard to read and physicality is reduced. To address those problems, the software watches for particular movements.

When people want to attract attention, they can raise a hand and the software recognizes the motion and displays a raised-hand emoticon. That is much easier than trying to attract attention by waving, which might go unnoticed in a sea of faces, or waiting for a turn to speak without interrupting someone.

Other actions that the platform can recognize are thumbs up and thumbs down for votes — and facepalming, just in case someone needs to express embarrassment. The company plans to add more gestures to the system in the future.
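
Headroom hasn’t published how its gesture pipeline is built, but the behavior described above can be sketched in rough terms. In the Python sketch below, classify_gesture, the gesture labels and the debounce thresholds are illustrative assumptions rather than Headroom’s implementation: a reaction is surfaced only once a gesture persists across most of a short window of frames, so a stray wave doesn’t trigger a false emoji.

from collections import deque

# Hypothetical mapping from detected gestures to on-screen reactions.
GESTURE_EMOJI = {
    "raised_hand": "✋",
    "thumbs_up": "👍",
    "thumbs_down": "👎",
    "facepalm": "🤦",
}

def emit_reactions(frames, classify_gesture, window=15, threshold=10):
    """Debounce per-frame gesture predictions into discrete reactions.

    classify_gesture(frame) stands in for whatever vision model labels a
    single video frame; it is an assumption, not a documented Headroom API.
    A reaction fires only when a gesture is seen in most of the last
    `window` frames, so brief movements don't produce false emoji.
    """
    recent = deque(maxlen=window)
    last_emitted = None
    for frame in frames:
        recent.append(classify_gesture(frame))  # e.g. "raised_hand" or None
        for gesture, emoji in GESTURE_EMOJI.items():
            if recent.count(gesture) >= threshold and last_emitted != gesture:
                yield emoji  # surface the reaction to other participants
                last_emitted = gesture
        if last_emitted and recent.count(last_emitted) == 0:
            last_emitted = None  # let the same gesture fire again later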

Presenters also get tools that show how the audience is doing. For example, the host can gauge at a glance who is talking the most during a session and use that to steer the conversation to be more inclusive. It’s another way hosts can make sure important voices aren’t left out when a few employees dominate the meeting time.
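
The article doesn’t say how the talk-time gauge is computed, but a minimal sketch follows naturally if the transcription service emits diarized segments recording who spoke and when. The (speaker, start, end) tuple format below is an assumption for illustration, not Headroom’s data model.

from collections import defaultdict

def talk_time_share(segments):
    """Aggregate each participant's share of total speaking time.

    segments: iterable of (speaker, start_seconds, end_seconds) tuples,
    assumed to come from speech-to-text with speaker diarization.
    """
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    grand_total = sum(totals.values()) or 1.0
    return {speaker: seconds / grand_total for speaker, seconds in totals.items()}

# Example: one participant dominating a five-minute stretch.
print(talk_time_share([("Ana", 0, 120), ("Raj", 120, 150), ("Ana", 150, 300)]))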

Using real-time, interactive transcription, Headroom lets participants select transcribed text and turn it into notes for later. That also means a meeting can easily be searched afterward by its attendees and by those who couldn’t make it, opening up greater information-sharing opportunities.

“One of the aspects of meetings that we touched on is making them more effective during meetings and part of meetings is sharing information with each other and hopefully walk away with some new knowledge,” said Rabinovich. “But typically when meetings end that knowledge is vastly lost. Maybe some notes might be written down but often most information is lost and out of context. So the second part of Headroom is to add as much of this to a giant knowledge base so that it can be queried after the meeting is over.”

Entire meetings, and their transcripts, are archived and searchable by both participants and people who did not attend. That includes the notes created during the meeting. It’s also possible to rewind or fast-forward to any point in the meeting by clicking on the transcript in order to listen to that portion of the meeting and catch up.
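
Conceptually, both search and click-to-seek fall out of storing each transcript line with its offset into the recording. The data model below is a hypothetical Python sketch, not Headroom’s schema: a search returns the matching lines together with the timestamp a player would jump to.

from dataclasses import dataclass

@dataclass
class TranscriptLine:
    speaker: str
    start_seconds: float  # offset into the meeting recording
    text: str

def search_transcript(lines, query):
    """Return (seek_offset, line) pairs whose text contains the query."""
    q = query.lower()
    return [(line.start_seconds, line) for line in lines if q in line.text.lower()]

lines = [
    TranscriptLine("Julian", 42.0, "Let's review the launch timeline."),
    TranscriptLine("Andrew", 95.5, "The highlight model needs more training data."),
]
for offset, line in search_transcript(lines, "highlight"):
    print(f"seek to {offset}s -> {line.speaker}: {line.text}")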

Using AI, Headroom also watches for different types of engagement during the meeting by analyzing participants’ gestures and the transcription text. It uses this to produce highlights of the meeting with a natural language processing model trained on hundreds of thousands of minutes of human-tagged important moments in meetings. These “highlight reels” give people a summary lasting a few minutes for a meeting that could have run an hour.
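
Headroom hasn’t disclosed the highlight model itself, but once each segment of a meeting carries an importance score, assembling a recap can be as simple as a greedy selection under a time budget. In this sketch, score_fn stands in for the trained model and the segment dictionaries are assumed for illustration.

def build_highlight_reel(segments, score_fn, budget_seconds=180):
    """Pick the highest-scoring segments that fit the budget, in meeting order.

    segments: dicts with "start" and "end" offsets (seconds) plus transcript text.
    score_fn: a stand-in for the model trained on human-tagged important moments.
    """
    ranked = sorted(segments, key=score_fn, reverse=True)
    reel, used = [], 0.0
    for seg in ranked:
        duration = seg["end"] - seg["start"]
        if used + duration <= budget_seconds:
            reel.append(seg)
            used += duration
    return sorted(reel, key=lambda seg: seg["start"])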

“It’s like you’re watching a TV show and instead of re-watching the entirety of an episode it says ‘previously on X,’” Rabinovich said of the system. “You get to watch a minute of it and you’re ready for the next one.”

This feature saves time and energy for co-workers who would otherwise need to summarize the meeting. People can still watch the entire meeting or read the AI-generated transcript, should they choose, but the highlights exist to help provide a quick at-a-glance summary.

Combining all of these features — AI transcription, archiving and the ability to search past meetings’ transcripts and highlights — Headroom can also help prevent duplicated work across meetings, Green said.

Headroom’s AI-enhanced videoconferencing platform is available today for free with a signup on the company’s website.

Image: Headroom
