lilyball 21 hours ago

If you enable a feature that summarizes messages, then messages get summarized. Summarizing messages requires rewriting them. Where is the surprise here? Not only that, but the moment you click on the summary, you see the whole thing. Whether that's clicking the email summary to view the email, or clicking a summarized notification stack to expand it and see the notifications. The summary doesn't replace the message, it just lets you know whether it's something you want to click on.

  • kalleboo 20 hours ago

    The summary of a spam message is "this is a spam message", not "you've gotten the inheritance of a Nigerian prince".

    We're being told that AIs are basically as smart as humans, that AGI is arriving this year and all our jobs are on the line. The average human assistant would not get this assignment wrong.

    • johann8384 4 hours ago

      The average human assistant would absolutely get this wrong. I've been absolutely amazed at how vulnerable the average human is.

      One example, a city manager sent $150k via wire transfer to a foreign account because a phishing email told them to. 1) The city didn't typically pay things by wire transfer 2) The amount was much more than would normally be handled by this person's role alone 3) The process for paying something required an invoice and approvals, an email should never have been enough to trigger this response.

      • jrflowers 2 hours ago

        >The average human assistant would absolutely get this wrong

        “The average human falls for scam texts” is a funny thing to make up in pursuit of defending software that falls for scam texts. Scammers send out these texts by the thousands because the average human does not fall for scam texts.

        Most people are average, which would mean most people that you know have fallen for scam texts if your position were true. Is it true in your social circle that most people you know have fallen for scam texts and given the scammers money?

    • lilyball 14 hours ago

      It's not the job of a summary model to identify spam. The summary model runs after junk filtering, so everything that is given to the summary model is, by definition, already classified as "not spam". A summary model that decides the thing it's supposed to be summarizing is spam is a broken summary model.

      • alt227 12 hours ago

        If a summary model cannot summarise spam as 'spam' then its not a very good summary model.

        • lxgr 9 hours ago

          What’s spam to me is a good deal or important information to somebody else. This is a text summarization model, not a personal assistant.

          • ffsm8 9 hours ago

            While that's technically true for theoretical other cases, I sincerely doubt anyone could argue that this applies to this particular example.

            Is your argument that it cannot determine obvious spam - because there are people thatd classify some authentic emails as spam?

            • lxgr 8 hours ago

              If something can be obviously determined as spam, it shouldn't make it through the spam filter, which runs way before this model.

              If the spam detection model can't, e.g. because doing so would require more context on the user and more capabilities, I don't think a summarization model would be able to help, and you'd need something more like a "personal assistant" model.

              That said, I don't think we are too far away from that, but this particular model is not that.

    • matthewdgreen 19 hours ago

      From my perspective it doesn't matter if AIs are smarter than human assistants. The problem here is that AI summarization is predictable. Spammers will run the summarization tooling on their own AI-generated messages until they find candidates that don't get recognized as spam. No matter how smart the AI is, they'll eventually beat it. I'm sure the results are going to be super fascinating.

      • cuteboy19 18 hours ago

        Apple intelligence summarization is certainly not predictable. it can do some really funny things

        • matthewdgreen 8 hours ago

          Most of these systems are randomized, but given sufficient oracle access to the APIs you should be able to detect which messages are likely to get summarized as non-spam.

    • LocalH 8 hours ago

      > We're being told that AIs are basically as smart as humans, that AGI is arriving this year and all our jobs are on the line.

      Two of those are lies writ large, and I hope that most here understand which two.

    • rafram 18 hours ago

      The on-device model that Apple is using for this isn’t billed as “basically as smart as humans,” to be fair. It’s likely just a small text summarization model, not a foundation model like GPT-*.

      • kalleboo 16 hours ago

        I think the marketing is deliberately vague. The feature is called "intelligence" but it's literally not intelligent.

        • Crosseye_Jack 11 hours ago

          You’re reading the marketing copy wrong!

          It’s clearly as intelligent as an Apple, at least on par with a Granny Smith!

    • scarface_74 19 hours ago

      Even worse, we have done a good job of detecting spam before AI was a thing.

    • lxgr 10 hours ago

      That’s not a summary, that’s an interpretation/editorialization.

      • kalleboo 9 hours ago

        Every summary is an interpretation/editorialization. That's what happens when you have to pick out what is important or not. A poor summary of a text picks out and presents the unimportant bits.

        • lxgr 8 hours ago

          Yes, but there are degrees/layers to this, and having different words for each makes sense.

          At a very very high level, a spell checker and an editor advising on your target audience's aesthetic preferences and cultural nuances are the same thing; at the level we're talking about here the distinction becomes important.

          Imagine asking an editor how to spell "xenomorph" and their feedback is to tell you to rewrite your story in a fantasy setting instead.

    • rsynnott 7 hours ago

      > We're being told that AIs are basically as smart as humans, that AGI is arriving this year and all our jobs are on the line.

      I mean, not by anyone worth listening to, tho.

  • xtagon 21 hours ago

    Not all rewriting and not all summarization is the same, and the surprising part is that it often makes it seem more legitimate. There's no reason, for example, that it couldn't rephrase it in a way that conveys it as suspicious.

  • jrflowers 20 hours ago

    I like the idea that something has to be surprising rather than factual to be “news”.

    A dispassionate and accurate description of a new feature that reaches millions of users? Not news. The lead singer of Limp Bizkit having an unexpected cameo in a recent movie where he sings an acoustic George Michael cover? That’s news baybee!

  • eviks 17 hours ago

    > Summarizing messages requires rewriting them. Where is the surprise here?

    You need to move from describing the process to reviewing the content. If rewriting makes it harder to see it's a scam when you expected the opposite, here is your surprise

  • miohtama 21 hours ago

    The surprise is in how to make clickbait titles that sell on the AI fear

    • lilyball 21 hours ago

      Yeah, to be honest, the only reason I wrote the comment was because my understanding is HN stories whose comments/points ratio is higher get downranked and I'm tired of seeing clickbait artificial outrage stories.

      • koolala 20 hours ago

        A human summary would say " This smells like BS". This article though smells legit. This is an actual problem that it doesn't seem intelligent enough to handle.

        • gabeio 19 hours ago

          Very few if any of the AIs have reasoning yet. Yes, a human would be able to think through and tell you it’s spam or better yet not even bother telling you about the message _because_ it is spam but we aren’t quite there yet.

          • dpig_ 19 hours ago

            My mom's iPhone should not be summarising spam messages if we "aren't quite there yet."

            • gabeio 18 hours ago

              Considering the spam message could be written better using AI I don’t really see how a bad summary makes that much worse. Spammers can make their messages better before they were sent using AI. If you think your mom is at risk maybe that’s the issue that needs to be solved separately.

              This isn’t it rewriting the message only the notification. And the feature is disable-able and not on by default from what I found since I had to go look for it to enable it.

              • jrflowers 14 hours ago

                >If you think your mom is at risk maybe that’s the issue that needs to be solved separately.

                This is a good point. The rise of chat gpt means that only the most savvy should use tech

    • cuteboy19 18 hours ago

      the point is that Apple intelligence is bad and not that AI is bad

the_snooze 20 hours ago

I feel like the whole "notification summary" use case addresses the wrong problem. It's true that a lot of people are innundated in notifications of varying levels of importance. But I don't think the solution is to condense those diverse notifications and lose a lot of context in the process. A better solution is to reduce notifications altogether, so that the problem remains human-scaled and leaves the user in charge.

Maybe an AI (or even a simple statistical model) can suggest group chats to suppress notifications for, based on how frequently the user actually reads and engages with them. Maybe the notifications system can be overhauled altogether so that non-DM uses are severely restricted by default (e.g., a news app can only display one notification at a time).

The problem with too many notifications is too many notifications, not the user wishing they had the motivation to read all of them.

  • ninkendo 19 hours ago

    I think I basically agree.

    My policy is basically: people can contact me, computers can’t.

    The only notifications I allow are for apps that are a proxy for human communication: messages, email (with a heavily curated inbox in which I dutifully unsubscribe from everything that’s not something I want), and very little else.

    The only time I want a notification is if some human being thought to themself “I need to talk to ninkendo right now, and I want him to be notified”. No automated systems, no mass communications.

    I’ve never found any system to be of any help with this other than just disabling notifications for every single thing other than these 2-3 apps. No “focus modes”, no auto-summaries, etc. None of them are as effective as just not having the notifications in the first place.

    • 1123581321 18 hours ago

      Focus modes are still nice for letting different groups of people contact you, though.

rubslopes 21 hours ago

This year's marketing trend: crafting copy in a way that it is prioritized by Apple Intelligence.

  • kcplate 21 hours ago

    I’ve been saying over the last year to our marketing team that we should put some effort into better understanding how AI reads and understands our website copy and email copy so we can optimize there.

  • pingers123 19 hours ago

    this year's scamming trend: crafting scams in a way that manipulates Apple Intelligence

egberts1 7 hours ago

Original: "my dear John Smith. My name is Prince Abiodun Adosina. A fortune is awaiting you but it’s locked in an overseas account"

Apple AI: "Dear John Smith, Prince Abiodum Adosina is me. A significant amount of my overseas saving stands to flow into your bank account. All that is required of you to unlock your portion toward you is to ..."

qintl55 a day ago

after years of teach humans to identify and avoid scams, we now have to teach AI to do this. le sigh

sinuhe69 16 hours ago

They aim for the low-hanging fruits like summarizing, thinking it could not go wrong. Oh boys, how wrong they could be! If they aim high like Jobs always war, Apple Intelligence could actually provide values instead of being an annoyance.

jaredsohn 20 hours ago

Surprised to see no mention of how this impacts people who purposefully write scam messages poorly to filter out people are less likely to get conned. Curious if this starts making traditional scammers less efficient.

_boffin_ 5 hours ago

Has anyone extracted the promos for the message summary feature yet?

bobheadmaker 5 hours ago

Well, that is some good use case they found :P

ytch 21 hours ago

It's just garbage in, garbage out. AI isn't the safeguard of everything.

cyberge99 18 hours ago

Every AI would do this if you asked it to.

cadamsau 16 hours ago

Soooo it does what it’s supposed to?

whatever1 a day ago

Is it that bad because it is a smaller model compared to the sota?

  • phire 21 hours ago

    It's not related to the size of the model.

    It's more of a prompting/design issue. The LLM has been told to summarize the message, not to identify it as a scam or not.

    • lilyball 21 hours ago

      Nor should it. Scam identification is a feature of junk mail filtering, not of summarization. Any scam that's being summarized is one that has already made it past junk filtering.

      • somebehemoth 21 hours ago

        As it made it past other filtering it is more dangerous. The last possible protection is the user's attention. If the user reads a coherent summary and then opens the message, might they be more primed to treat the message as legitimate?

        I agree with you about other filtering being critical and more appropriate. I understand Apple made a good feature. I don't actually know if it will have a negative side effect. I do think it is a good idea to examine the feature critically due to the scale of Apple and the trust its users place in the company to protect them from harm. I also do not think the title of this article is very reasonable bc it insinuates something sinister.

      • EForEndeavour 21 hours ago

        I'd expect a suite of features explicitly designed to mimic human intelligence to know, like a dutiful assistant, that if I give them access to my inbox and ask them to "summarize" my new messages, and they see a scam message, they notice and delete it rather than pass it through to me like an idiot. We're supposed to be moving away from the era of computers dumbly following explicit instructions and into the era where "AI" finally delivers on the hollow promises of Siri and Alexa.

        • phire 20 hours ago

          And that's a problem with human expectations not lining up with the reality of LLMs.

          You could actually implement such functionality with current LLMs. Even one small enough to run on a phone.

          But you can't implement it well enough to be trustworthy. It will make mistakes, and people will quickly stop trusting it. Even if it was as good as a true human assistant (which it's not), humans still make mistakes. We have a tendency to be forgiving of the mistakes that humans make, but we expecit AI to be near perfect and will judge it far more harshly... Hell, just look at how people are blaming it for summarizing a phishing scam.

          This is before you even consider the potential of prompt injection attacks. If you give the LLM the power to delete emails, it will be vulnerable to people sending emails telling it to delete emails. A job applicant might be able to tell apple intelligence to delete all other applications.