I cannot follow this post at all. Where does $10B come from? It's never mentioned aside from the tagline.
I'm also trying to understand what OASIS was really supposed to do that was going to.... uh... matter? It's a video chat app where you can be someone else in the video. Ok, that's cool but I'm failing to see how this is groundbreaking.
> Her: "Wait, haven't we banned you from the App Store? Why haven't we killed your company already?"
> Me: "We... haven't exactly told anyone at Apple about this."
> Her: "You're a mosquito. Apple will just stomp on you and you will not exist."
Told Apple what? That they have a bug? Why would they ban you from the App Store? Why would someone say "You're a mosquito. Apple will just stomp on you and you will not exist."? It makes zero sense to me given the context laid out here.
Lastly, did Apple fix the problem? They made changes but we won't know anything for sure until next Friday at the very earliest.
Seems like a lot of name dropping (why should I care about a big name that didn't invest in you?) and big numbers ($10B, never explained) for a failed startup.
> You can be right about the future and still fail in the present.
Not clear at all what OASIS was "right" about really.
> Apple's A19 Pro isn't just a chip announcement. It's a confession. An admission. A vindication.
Ok, sure. If you say so.
Lastly, what were you "right" about? That iPhones can get hot?
Just none of this makes any sense or seems very interesting IMHO.
The post ends up being something of a caricature of how founders turn everything around them into something about themselves.
> Why would someone say "You're a mosquito. Apple will just stomp on you and you will not exist."? It makes zero sense to me given the context laid out here.
I'm telling you what I was told. It's a true story. I was there. It happened to me.
Why would I make up a detail like that?
I understood it to be a throwaway estimate for the cost of Apple building out a specialized chip architecture that can handle excessive workloads from transformer-based AI apps.
We posted the same thing, in essence, at the same time. This piece is completely nonsensical in every way, and I presume it is targeted at laymen who'll just go along with it. Anyone who reads that last bit about MLX and CoreML without realizing the author has no clue what they're talking about should understand they're being duped.
Apple adopted a new cooling technique on their highest end device to differentiate and give spec sheet chasers something to be hyped about. It should help reduce throttling for the very odd event where someone is running a mobile device at 100% continuously (which is actually super rare in normal usage). It's already in the Pixel 9 Pro, for instance, and is a new "must have". It has nothing to do with whatever app these guys were building.
The rest of the nonsense is just silly. If you are building an app for a mobile device and it pegs the CPU and GPU, you're going to have a bad time. That's the moment you realize it's time to go back to the drawing board.
Our app wasn't running on the CPU or GPU -- the actual software we built was running entirely on the Apple Neural Engine, and it was crazy fast because we designed the architecture explicitly to run on that specific chip.
We were just calling the iPhone's built-in face tracking system via the Vision Framework to animate the avatars. That's the thing that was running on GPU.
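For anyone curious, the Vision side is just the stock face-landmarks pipeline -- roughly the following (a generic sketch, not our actual production code; camera capture and error handling omitted):

```swift
import Vision
import CoreVideo
import CoreGraphics

// Generic sketch of the built-in face tracking path, not OASIS's actual code.
// Runs Apple's face-landmarks detector on one camera frame and returns the
// normalized landmark points you would use to drive an avatar rig.
func faceLandmarks(in pixelBuffer: CVPixelBuffer) -> [CGPoint] {
    let request = VNDetectFaceLandmarksRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])

    guard let face = request.results?.first,
          let points = face.landmarks?.allPoints?.normalizedPoints
    else { return [] }
    return points
}
```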
Okay, though I'm not sure what that has to do with my comment. I understood that from the post: you were concurrently maxing out multiple parts of the SoC and it was overheating as they all contributed to the thermal load. This isn't new or novel -- benchmarks that saturate both the CPU and GPU are legendary for throttling -- though the claim that somehow normal thermal management didn't protect the hardware is novel, albeit entirely unsubstantiated.
That is neither here nor there on CoreML -- which also uses the CPU, GPU, and ANE, and sometimes a combination of all of them -- or the weird thing about MLX.
I don't get what's so weird about MLX. Apple's focus is obviously on MLX / Metal going forward.
The only reason to use CoreML these days is to tap into the Neural Engine. When building for CoreML, if one layer of your model isn't compatible with the Neural Engine, it all falls back to the CPU. Ergo, CoreML is the only way to access the ANE, but it's a buggy, all-or-nothing gambit.
Have you ever actually shipped a CoreML model or tried to use the ANE?
>Apple's focus is obviously on MLX / Metal going forward.
This is nonsensical.
MLX and CoreML are orthogonal. MLX is about training models. CoreML is about running models, or ML-related jobs. They solve very different problems, and MLX patches a massive hole that existed in the Apple space.
Anyone saying MLX replaces CoreML, as the submission does, betrays that they are simply clueless.
>The only reason to use CoreML these days is to tap into the Neural Engine.
Every major AI framework on Apple hardware uses CoreML. What are you even talking about? CoreML, by the very purpose of its design, uses any of the available computation subsystems, which on the A19 will include the matmul units on the GPU. Anyone who thinks CoreML exists to use the ANE simply doesn't know what they're talking about. Indeed, the ANE is so limited in scope and purpose that it's remarkably hard to actually get CoreML to use it.
>Have you ever actually shipped a CoreML model or tried to use the ANE?
Literally a significant part of my professional life, which is precisely why this submission triggered every "does this guy know what he's talking about" button.
Yes, MLX is for research, but MLX-Swift is for production and it works quite well for supported models! Unlike with CoreML, the developer community is vibrant and growing.
https://github.com/ml-explore/mlx-swift
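For a taste of the API, the array layer looks roughly like this (a minimal sketch from memory; the exact initializers in current mlx-swift may differ slightly):

```swift
import MLX

// Minimal sketch of the MLX-Swift array API (from memory; real inference code
// goes through the higher-level packages such as MLXNN).
// MLX is lazy: operations build a graph and eval() forces the computation on Metal.
let a = MLXArray([1, 2, 3, 4])
let b = a * 2 + 1
eval(b)
print(b)   // prints the evaluated array: [3, 5, 7, 9]
```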
Maybe I am working on a different set of problems than you are. But why would you use CoreML if not to access the ANE? There are so many other, better, newer options like llama.cpp, MLX-Swift, etc.
What are you seeing here that I am missing? What kind of models do you work with?
I know what MLX is. MLX-Swift is just a more accessible facade, but it's still MLX. The entire raison d'être for MLX is training and research. It is not a deployment library. It has zero intention of being a deployment library. Saying MLX replaces CoreML is simply nonsensical.
> But why would you use CoreML if not to access ANE?
The whole point of CoreML is hardware-agnostic operation, not to mention higher-level operations for most model touchpoints. If you went into this thinking CoreML = ANE, that's just fundamentally wrong from the start. The ANE is one extremely limited path for CoreML models. The vast majority of CoreML models will end up running on the GPU -- using Metal, it should be noted -- aside from some hyper-optimized models for core system functions, but if/when Apple improves the ANE, existing models will just use that as well. Similarly, when you run a CoreML model on an A19-equipped device, it will use the new matmul instructions where appropriate.
That's the point of CoreML.
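Concretely, an app only states a compute-unit *preference*; Core ML decides, per layer, which backend actually runs it. A minimal sketch (the model name here is a placeholder, not anything from this thread):

```swift
import CoreML

// The app states a compute *preference*; Core ML decides per layer whether the
// CPU, GPU (Metal), or Neural Engine actually runs it. "SomeModel" is a
// placeholder name, not a model referenced anywhere in this thread.
func loadModel() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all                    // let Core ML choose (default)
    // config.computeUnits = .cpuAndGPU           // skip the ANE
    // config.computeUnits = .cpuAndNeuralEngine  // prefer the ANE over the GPU
    // config.computeUnits = .cpuOnly             // debugging / deterministic path

    let url = Bundle.main.url(forResource: "SomeModel", withExtension: "mlmodelc")!
    return try MLModel(contentsOf: url, configuration: config)
}
```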
Saying other options are "better, newer" is just weird and meaningless. Not only is CoreML rapidly evolving and able to support just about every modern model feature, but in most benchmarks of CoreML vs. people's hand-crafted Metal, CoreML smokes them. And then you run it on an A19 or the next M# and it leaves them crying for mercy. That's the point of it.
Can someone hand-craft some Metal and implement their own model runtime? Of course they can, and some have. That is the extreme exception, and no one here should think it has replaced anything.
It sounds like your experience differs from mine. I oversaw teams trying to use CoreML in the 2020-2024 era who found it very buggy, as per the screenshots I provided.
More recently, I personally tried to convert Kokoro TTS to run on ANE. After performing surgery on the model to run on ANE using CoreML, I ended up with a recurring Xcode crash and reported the bug to Apple (as reported in the post and copied in part below).
What actually worked for me was using MLX-audio, which has been great as there is a whole enthusiastic developer community around the project, in a way that I haven't seen with CoreML. It also seems to be improving rapidly.
In contrast, I have talked to exactly one developer who has ever used CoreML since ChatGPT launched, and all that person did was complain about the experience and explain how it inspired them to abandon on-device AI for the cloud.
___
Crash report:
A Core ML model exported as an `mlprogram` with an LSTM layer consistently causes a hard crash (`EXC_BAD_ACCESS` code=2) inside the BNNS framework when `MLModel.prediction()` is called. The crash occurs on M2 Ultra hardware and appears to be a bug in the underlying BNNS kernel for the LSTM or a related operation, as all input tensors have been validated and match the model's expected shape contract. The crash happens regardless of whether the compute unit is set to CPU-only, GPU, or Neural Engine.
*Steps to Reproduce:*
1. Download the attached Core ML models (`kokoro_duration.mlpackage` and `kokoro_synthesizer_3s.mlpackage`).
2. Create a new macOS App project in Xcode. Add the two `.mlpackage` files to the project's "Copy Bundle Resources" build phase.
3. Replace the contents of `ContentView.swift` with the code from `repro.swift`.
4. Build and run the app on an Apple Silicon Mac (tested on M2 Ultra, macOS 15.6.1).
5. Click the "Run Prediction" button in the app.
*Expected Results:*
The `MLModel.prediction()` call should complete successfully, returning an `MLFeatureProvider` containing the output waveform. No crash should occur.
*Actual Results:*
The application crashes immediately upon calling `model.prediction(from: inputs, options: options)`. The crash is an `EXC_BAD_ACCESS` (code=2) that occurs deep within the Core ML and BNNS frameworks. The backtrace consistently points to `libBNNS.dylib`, indicating a failure in a low-level BNNS kernel during model execution. The crash log is below.
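(The crash log itself is omitted here. For context, the failing call has roughly this shape -- a sketch, not the actual `repro.swift`; the input feature names and shapes are placeholders rather than the model's real ones.)

```swift
import CoreML

// Rough shape of the failing call from the bug report -- not the actual
// repro.swift. Feature names and shapes are placeholders; the real model's
// inputs differ.
func runPrediction() throws -> MLFeatureProvider {
    let url = Bundle.main.url(forResource: "kokoro_synthesizer_3s",
                              withExtension: "mlmodelc")!
    let config = MLModelConfiguration()
    config.computeUnits = .cpuOnly   // crash reproduces on CPU, GPU, and ANE alike

    let model = try MLModel(contentsOf: url, configuration: config)

    let tokens = try MLMultiArray(shape: [1, 64], dataType: .int32)   // placeholder shape
    let inputs = try MLDictionaryFeatureProvider(
        dictionary: ["tokens": tokens])                                // placeholder name
    let options = MLPredictionOptions()

    // EXC_BAD_ACCESS is reported inside libBNNS.dylib on this call.
    return try model.prediction(from: inputs, options: options)
}
```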
I can't speak to how CoreML worked for you, or how the sharp edges cut. I triply wouldn't comment on ANE, which is an extremely limited bit of hardware mostly targeted at energy efficient running of small, quantized models with a subset of features. For instance extracting text from images.
CoreML is pervasively used throughout iOS and macOS, and this is more extensive than ever in the '25 versions. Zero percent of the system uses MLX for the runtime. The submission's incredibly weird and nonsensical contention that, because the ANE doesn't work for them, Apple must be admitting something is just laughable silliness.
And FWIW, people's impressions of the tech world from their own incredibly small bubble are often deeply misleading. I've read so many developers express with utter conviction that no one uses Oracle, no one uses Salesforce, no one uses Windows, no one uses C++, no one uses...
Keeping the Apple denialism aside: a startup finds a bug in the software, the company doesn't want to admit it (like Antennagate), and then goes about solving the problem.
It strikes me as troublesome that a company that found a bug could be banned from the App Store and the rep talks about it as killing the company.
> It strikes me as troublesome that a company that found a bug could be banned from the App Store and the rep talks about it as killing the company.
Yes, all of that would be troublesome... if it were true. Given the rest of the post's content I'm leaning pretty heavily towards "made up". This whole thing reads like "Am I the Asshole" or similar subreddits which are 99% outlets for fiction writers.
> Apple built a liquid cooling system for phones.
It's really just a heat pipe - vapour trapped inside copper, not circulating liquid to a radiator.
https://www.youtube.com/watch?v=OR8u__Hcb3k
The vapor does recondense into a liquid which is why they usually have a very rough and porous inner surface texture. But you're right that there isn't meaningful circulation, just convection.
The claims are incredible:
“at 60fps in HD resolution. In real-time. On iPhone. In 2021.”
“5ms latency”
“512 x 512 pixel resolution per video”
I don’t mean to be rude but I’m having trouble convincing myself this is a real story.
I can see why you'd be skeptical. It was pretty insane what we did.
Summary: They made an iOS app that was slow, and the iPhone got hot. This generation of iPhone is faster, and adopted a vapor chamber heat spreader. Our app runs faster and the phone doesn't get as hot. Therefore, by correlation, our app caused the new iPhone.
This is such an odd submission, and a lot of the claims are bizarre and seemingly nonsensical. Maybe I'm just misunderstanding. Exchanges in it seem remarkably...fictional. It reads like a tosser peacocking LinkedIn post by someone desperately trying to scam some rubes.
It also seems like one of those self-aggrandizing things that tries to spin everything as a reaction to themselves, instead of just technology progressing. No, vapour chamber cooling isn't some grand admission, it's something that a variety of makers have been adopting to reduce throttling as a spec-sheet item of their top end devices. It isn't all about you.
And given that the base 17 doesn't have VCC, I guess Apple isn't "admitting" it at all, no?
And the CoreML v MLX nonsense at the end is entirely nonsensical and technically ignorant. Like, wow.
No one should learn anything from this piece. The author might know what they're talking about (though I am doubtful), but this piece was "how to make an Apple event about ourselves," and it's pretty ridiculous.
> And given that the base 17 doesn't have VCC, I guess Apple isn't "admitting" it at all, no?
It will be fun to see how hot the iPhone Air gets since it has the same chip as the 17 Pro (w/ one fewer GPU core), but a less thermally conductive metal and no vapor chamber.
I imagine it will be a lot like the MacBook Air, in that it just thermal-throttles faster. It has the same chip as the Pro but will never see the same _sustained_ performance.
There is a difference because you can get an M4 Pro/Max and much more RAM with a MacBook Pro.
This will be one of those situations where we'll really miss Anandtech. Still can't believe that site died.
In the real world I doubt anyone will ever notice the difference, VCC or not. VCC will only materially affect usage when someone is doing an activity that hits throttling, which is actually extraordinarily rare in normal use and usually only comes into play in benchmarking. The overwhelming majority of the time we peg those cores for a tiny amount of time to get a quick Animoji or text extraction from an image, and so on. Even the "AI" usage on a mobile device is extremely peaky.
tl;dr This guy's AI software couldn't run on iPhones without damaging them. That's an iOS/iPhone bug. He believes Apple put a more advanced cooling solution into their latest phones because new AI software requires more cooling for the chips.
The whole thing is written in a bombastic storytelling style that is typical of LinkedIn threads. If this is entertaining to you, this is the link for you since it has actual image examples of their model output varying between platforms.
What an absolute ass that Apple engineer was ...
For an off-the-books meeting with someone who was concerned about getting banned, that seems like a lot of candor. Perfectly reasonable.
My experience is limited, but no one I've engaged with at Apple has ever admitted fault for anything. I think it's a liability/culture thing.
I hope we both get surrounded by the kind of people each of us prefer :)
I might be wrong here, but based on the context, I read the "I don't want to know about your names or your company" as CYA for both parties: "I can't be legally obligated to disclose things I don't know, so please don't tell me anything you don't want Apple to know. I can't say that outright, though, so instead I'm going to say something that could be reasonably read as 'I'm busy and don't want to hear your pitch'".
The feeling I get reading that is that you shouldn't interpret that conversation as a verbatim quotation. Even though the author did everything possible to make it seem like it was verbatim, my hunch from reading the entire thing is that it's summarized and viewed through the author's eyes.
E.g. maybe they actually said some variation of "your app is bricking iPhones? how did you get through app store review..." and the author interpreted it as "squashing his company like a bug".
He did an hour of technical discussions and found the issue. It's not his fault Apple makes him sign an employment secrecy contract so strict that he has to be that careful or lose his job.
Your bar is higher than mine.
Unlike Apple's formal "developer evangelist" and several others I contacted, the guy actually took the time to talk to us, and I was/am grateful for that. He's a cog in a very large corporate machine. Apple is Apple. He's not the CEO. He was doing his job and did me a favor. I am grateful to him.
He was a part of Apple. So by default he was an …
But he was pretty professional in the way he went about it, “no names, no company”. And “You found a security bug! Show me. But you won’t get credit”
The engineer (second convo) or the PM (first convo)?
It's obviously exaggerated for humor. Has no one ever told anyone else a story before? No one talks like this.
"Off-the-books" meetings are just friend-based connections. I got a bug in the Linux nvidia driver fixed that was affecting me by just hitting up an old friend. I could write that story as him saying "Keep this top-secret" if I wanted because that's just fun storywriting.
OP here: I can see why you'd think "this can't possibly be real," but I assure you that the story is real, not exaggerated. I was there.
So the quotes are direct and not paraphrased? Looks like I’m the one without my finger on the pulse of human interaction then.
I mean, to be fair, these events happened several years ago. My memory is as faulty as any other human being's, but as far as I remember, this is exactly what happened. These were very memorable events that I remember distinctly. It's possible my memory is distorted, but this is literally what I believe happened to the best of my ability.
Obviously, there was more to the conversation than what I wrote, but these are the actual words that I remember being said.
For more context, the PM at Apple in question was a former colleague of my then girlfriend. I reached out to her to have a friendly catch up. It wasn't positioned as meeting officially with Apple. I was literally just going through LinkedIn trying to figure out who I knew at Apple. So I hit her up on LinkedIn and asked to catch up, then told her about the situation. And this is how she responded. Worth noting: English is not her native language.
Amusing to get the Trisolaran response to a warm intro. I suppose there’s all kinds of people in the world. Unlucky there.
I can’t imagine anyone treating me like that and I’ve dealt with billionaires. Hilarious to have some L3 FAANG act like Genghis Khan.