Unitary, an EF alumnus, raises £1.3M seed for its content moderation AI

Unitary, a startup that's developing AI to automate content moderation for "harmful content" so that humans don't have to, has picked up £1.35 million in funding. The company is still in development mode but launched a trial of its technology in September.

Led by Rocket Internet's GFC, the seed round also includes backing from Jane VC (the cold email-friendly firm backing female-led startups), SGH Capital, and a number of unnamed angel investors. Unitary had previously raised pre-seed funding from Entrepreneur First, as an alumnus of the company builder programme.

"Every minute, over 500 hours of new video footage are uploaded to the internet, and the amount of disturbing, abusive and violent material that is put online is quite astonishing," Unitary CEO and co-founder Sasha Haco, who previously worked with Stephen Hawking on black holes, tells me. "Currently, the safety of the internet relies on armies of human moderators who have to watch and take down inappropriate material. But humans cannot possibly keep up."

Not only is the volume of content uploaded ever-increasing, but the people employed to moderate content on platforms like Facebook can suffer serious harm. "Repeated exposure to such disturbing footage is leaving many moderators with PTSD," says Haco. "Regulators are responding to this crisis and putting increasing pressure on platforms to deal with harmful content and protect our children from the worst of the internet. But currently, there is no adequate solution."

Which, of course, is where Unitary wants to step in, with a stated mission to "make the internet a safer place" by automatically detecting harmful content. Its proprietary AI technology, which uses "state of the art" computer vision and graph-based techniques, claims to be able to recognise harmful material at the point of upload, including "interpreting context to tackle even the more nuanced videos," explains Haco.

Meanwhile, although there are already several solutions available to developers that can detect restricted material of the more obvious kind, such as explicit nudity or extreme violence (AWS, for example, offers one such API), the Unitary CEO argues that none of these are remotely good enough to "truly replace human involvement."

"These systems fail to understand more insidious behaviours or signals, especially on video," she says. "While current AI can cope well with short video clips, longer videos still require humans in order to understand them. On top of this, it is often the context of the upload that makes all the difference to its meaning, and it is the ability to incorporate contextual understanding that is both extremely challenging and fundamental to moderation. We are tackling each of these core issues in order to achieve a technology that will, even in the near term, massively cut down on the level of human involvement required and one day achieve a much safer internet."
