While NSFW AI systems are built to recognise adult content, real-time threat detection is still an evolving capability. As of 2023, research consistently finds that NSFW AI performs well at identifying explicit content in still frames and pre-recorded videos but far less well at real-time threat detection. A Stanford University study attributes this to the complexity of human behaviour and the unpredictability of online interactions: AI-based systems cannot instantly detect live threats. These systems match patterns and keywords recognised by their algorithms, so they cannot respond adaptively while a live call or stream is in progress.
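To see why pattern-and-keyword matching struggles with live interactions, consider a minimal sketch of such a filter. The patterns and function names here are hypothetical illustrations, not any real platform's moderation list; the point is that a matcher can only react to strings it already knows, so trivially obfuscated text slips past it.

```python
import re

# Hypothetical blocked-pattern list; terms are placeholders for illustration.
BLOCKED_PATTERNS = [
    re.compile(r"\bexample-banned-term\b", re.IGNORECASE),
    re.compile(r"\bexplicit\W*content\b", re.IGNORECASE),
]

def flag_message(text: str) -> bool:
    """Return True if the text matches any known blocked pattern."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)

# A pattern matcher only recognises what it has seen before:
flag_message("this stream shows explicit content")   # matched
flag_message("th1s str3am shows 3xpl1cit c0ntent")   # obfuscation slips through
```

The second call fails precisely because live participants adapt their wording faster than a static pattern list can be updated, which is the gap the research above describes.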
A 2022 ITU report highlighted this challenge for AI in the real-time domain, stating that "AI systems like those used to monitor social media or video streaming platforms generally lag by up to 10 minutes when detecting explicit material." The delay introduced by cloud computing makes real-time ML threat detection unreliable, particularly in scenarios that demand swift action, such as a live-streaming platform or a video call.
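The cloud-induced delay is easiest to see as a latency budget. The stages and millisecond figures below are illustrative assumptions for a typical cloud moderation round trip, not measurements from any real platform; even with optimistic numbers, each moderation pass lands well behind the live moment it is judging.

```python
# Hypothetical per-pass latency budget for cloud-based moderation (assumed values).
PIPELINE_STAGES_MS = {
    "frame capture + encode": 50,
    "upload to cloud API": 200,
    "inference queueing": 500,
    "model inference": 300,
    "decision + enforcement callback": 150,
}

def total_latency_ms(stages: dict) -> int:
    """Sum the stage delays to get end-to-end moderation latency."""
    return sum(stages.values())

budget = total_latency_ms(PIPELINE_STAGES_MS)  # 1200 ms per pass, under these assumptions
```

Over a sustained stream, queueing and retries compound these per-pass delays, which is consistent with the minutes-long lag the ITU report describes.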
Twitch, a popular live-streaming platform, for example, has begun using AI to moderate content in real time, but inappropriate behaviour and explicit sexual content still get through. According to an internal report the platform released last year, automated systems flagged only 30 per cent of problematic content quickly, even though around 85 per cent was eventually identified automatically. That delay can be critical when the goal is to prevent harm or act swiftly against emerging risks.
NSFW AI systems are getting better at classifying content, but they struggle with the context of live interactions. They excel at analysing static content yet lack the situational awareness and adaptability needed to respond quickly in complex real-time scenarios. As AI ethics expert Dr. Timnit Gebru puts it: "AI systems are only as good as the data they're trained on. In live interactions, behaviours and threats can change very quickly, but AI often fails to cope."
So can NSFW AI detect live threats as users discuss them? Sort of, but not with the accuracy needed to intervene in time. For a more in-depth look at these systems, solutions like nsfw ai are working to improve both response time and accuracy in real-world settings. Nonetheless, human involvement remains necessary to respond to advanced, imminent risks.