More than half of Americans reported receiving at least one scam call per day in 2024. To combat the rise of sophisticated conversational scams that deceive victims over the course of a phone call, we introduced Scam Detection late last year to U.S.-based, English-speaking users of the Phone by Google public beta on Pixel phones.

We use AI models processed on-device to analyze conversations in real time and warn users of potential scams. If a caller, for example, tries to get you to provide payment via gift cards to complete a delivery, Scam Detection will alert you through audio and haptic notifications and display a warning on your phone that the call may be a scam.
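The actual feature runs an on-device ML model; as a toy illustration of the flow described above (scan a live transcript, raise an alert on the first suspicious utterance), here is a minimal sketch. The regex patterns and function names are hypothetical stand-ins, not Google's implementation.

```python
import re

# Hand-written patterns stand in for the on-device model in this sketch.
SCAM_PATTERNS = [
    re.compile(r"\b(payment|pay|fee)\b.*\bgift cards?\b", re.IGNORECASE),
    re.compile(r"\bgift cards?\b.*\b(payment|pay|fee)\b", re.IGNORECASE),
    re.compile(r"\bwire (me|the) (money|funds)\b", re.IGNORECASE),
]

def is_potential_scam(utterance: str) -> bool:
    """Return True if the utterance matches a known scam pattern."""
    return any(p.search(utterance) for p in SCAM_PATTERNS)

def monitor_call(transcript_chunks):
    """Scan transcript chunks as they arrive; yield one alert on first hit."""
    for chunk in transcript_chunks:
        if is_potential_scam(chunk):
            yield "Warning: this call may be a scam."
            return

call = [
    "Hi, I'm calling about your package delivery.",
    "To complete the delivery, we need payment via gift cards.",
]
for alert in monitor_call(call):
    print(alert)
```

In the real feature, the pattern match would be replaced by model inference, and the yielded alert would trigger the audio, haptic, and on-screen warnings.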

  • 𝕸𝖔𝖘𝖘@infosec.pub · 6 hours ago

    In some countries and (if I'm not mistaken) some US states, if an AI is listening to a conversation, both parties must be made aware. If the other end isn't notified, that violates the regulations. Privacy erosion and manipulation likelihood aside, this is a terrible idea.

      • 𝕸𝖔𝖘𝖘@infosec.pub · 6 hours ago (edited)

        Who? Google? Google won't even get slapped on the wrist. I'm talking about the people using this (unwittingly or otherwise; the law doesn't care). Even if they don't care about the privacy implications or the abuse of the tech, they're opening themselves up to some serious liability.

        Edit: mistype