Google’s Gemini team is apparently sending out emails about an upcoming change to how Gemini interacts with apps on Android devices. The email informs users that, come July 7, 2025, Gemini will be able to “help you use Phone, Messages, WhatsApp, and Utilities on your phone, whether your Gemini Apps Activity is on or off.” Naturally, this has raised some privacy concerns among those who’ve received the email and those using the AI assistant on their Android devices.

  • Tja@programming.dev · 1 day ago

    It’s mostly Lemmy. In real life people go from amused to indifferent. I have never met anyone as hostile as the Lemmy consensus seems to be. If a feature is useful people will use it, be it AI or not AI. Some AI features are gimmicks and they largely get ignored, unless they are very intrusive (in which case the intrusiveness, not the AI, is the problem).

    • jjjalljs@ttrpg.network · 16 hours ago

      If a feature is useful people will use it, be it AI or not AI.

      People will also use it if it’s not useful, if it’s the default.

      A friend of mine did a search the other day to find the hours of something, and Google’s AI lied to her. Top of the page, just completely wrong.

      Luckily I said, “That doesn’t sound right” and checked the official site, where we found the truth.

      Google is definitely forcing this out, even when it’s inferior to other products. Hell, it’s inferior to their own, existing product.

      But people will keep using AI, because it’s there, and it’s right most of the time.

      Google sucks. They should be broken up, and their leadership barred from working in tech. We could have had a better future. Instead we have this hallucinatory hellhole.

      • ScoffingLizard@lemmy.dbzer0.com · edited, 6 hours ago

        They need a tech ethics board, and people need a license to operate or work in decision-making capacities. Anyone above the person making an unethical decision loses their license, too. The license should be cheap, to prevent monopoly, but you have to have one to handle data. No license, no company. Plant shitty surveillance without a separate, noticeable, succinctly presented agreement that is clear and understandable, with warnings about currently misunderstood uses, and you lose the license. First offense.

        Edit: Also, mandatory audits, with preformulated, separate, and succinct notifications: “This company sells your info to the government and police forces. Any private information, even sexual in nature, can be used against you. Your information will be used by several companies to build your complete psychological profile, to sell you things you wouldn’t normally purchase and to predict crimes you might commit.”

      • Tja@programming.dev · 10 hours ago

        How are you evaluating “inferior”? I like the AI search. That’s my opinion; you have yours.

        • jjjalljs@ttrpg.network · 2 hours ago

          Well, in this example, the information provided by the AI was simply wrong. If it had done the traditional search method of pointing to the organization’s website where they had the hours listed, it would have worked fine.

          This idea that “we’re all entitled to our opinion” is nonsense. That’s for when you’re a child and the topic is which jelly bean flavor you like. It’s not for policy or things that matter. You can’t just “it’s my opinion” your way through “this algorithm is O(n^2) but I like it better than the O(n) one, so I’m going to use it for my big website.” Or, more on topic, through “these results are wrong but I like them better.”
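          The big-O comparison in that comment can be made concrete. Here is a minimal, hypothetical sketch (not from the thread): two functions that answer the same question, where the quadratic one compares every pair of items and the linear one uses a set with average O(1) membership checks.

```python
def has_duplicates_quadratic(items):
    # O(n^2): nested scan over every pair of items
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): track seen items in a set (average O(1) lookups)
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

          Both return identical results on every input, which is exactly the point: preferring the quadratic version is not a matter of opinion once n gets large.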

    • brbposting@sh.itjust.works · 13 hours ago

      I imagine even the fk_ai crowd appreciates the non-gimmick stuff, as long as it is nothing like a chatbot.

      Tiny example from Gmail:

      This is all over, and it can be super useful from time to time.

      They say “f AI!”, but surely they still want better searches than were possible five years ago? As long as it’s not sycophantic, confabulatory, etc.

      Good point on intrusiveness.

      PS: I translated news from Iran this week using both AI tools and traditional translators. Who would advocate for the garbage traditional translation? As soon as I went the “AI” route, it was suddenly possible to understand what the journalists were trying to say. That doesn’t mean I want translators to lose their jobs; it just means I know what the best available technology is and how to use it to get a job done. (And just because it translates well doesn’t mean I’ll also trust it to summarize the article for me.)

    • dependencyinjection@discuss.tchncs.de · 1 day ago

      It’s one of the reasons I use Lemmy a little less these days: it’s evident to me that it’s an echo chamber for a tiny subset of humanity, and at times it just feels like a circle jerk where real change isn’t an option.