
Face ID hasn't been hacked: What you need to know


Absolutely no one has broken into Face ID's secure enclave or gained access to its data. All we've seen is headlines and videos involving family members and forensic artists trying to spoof it.

Face ID, Apple's facial identity sensor for iPhone X, is new, and that's both scary and ripe for exploitation. We saw it happen with Touch ID, from all the concern that manifested when Apple announced it alongside iPhone 5s to the sensationalized headlines and the attempts to spoof it after it launched. Now we're seeing the same thing with Face ID — fear, uncertainty, and doubt spread before it was even released, and spoof attempts following in a post-the-video-first, think-through-the-logic-second frenzy.

It's a shame. Face ID is incredibly enabling and accessible technology that can all but eliminate active authentication for users and allow them to unlock and use their iPhones more simply and easily than ever before. But those same people, the ones who could benefit the most, are being assaulted by an endless stream of headlines that are, bluntly, worse attacks than many of the so-called exploits they claim to be reporting.

I know this because every time one of those headlines goes live, I get calls and messages from my family members who are suddenly panicked by them. And they don't deserve that. Nobody does.

Face ID facts

Before Face ID was released alongside iPhone X, Apple published a white paper covering its implementation and current limitations. The company followed up with a support article.

I summed them all up, and some logical extensions, in my iPhone X review:

  • Face ID, as currently implemented, does not work in landscape orientation. (The camera system is optimized for portrait.)

  • Face ID needs to be able to see your eyes, nose, and mouth to be able to function. If too much of that area is blocked by IR filters (like some sunglasses) or other objects (like masks), there's not enough of your face to ID. (This is like the gloved finger with Touch ID.)

  • Direct sunlight on the Face ID camera can blind it, just like any camera. If you're standing with the sun directly over your shoulder, turn a bit before using Face ID. (This is like the moist finger with Touch ID.)

  • If you're under the age of 13, your facial features may not yet be distinct enough for Face ID to function properly and you'll have to revert to passcode.

  • Face ID can't effectively distinguish between identical twins (or triplets, etc.). If you have an identical sibling or even a similar-looking family member, and you want to keep them out of your iPhone X, you'll have to revert to passcode.

  • If you give someone else your passcode, they can either delete your face data and set themselves up on Face ID or, if they look similar to you, enter the passcode repeatedly after failed attempts to retrain Face ID to recognize their features as well or instead.

  • Unlike Touch ID, which allows for the registration of up to 5 fingers, Face ID currently only allows for one face. That means no sharing easy access with family members, friends, or colleagues.

  • If, for any reason, you don't like the idea of your face being scanned, you'll have to revert to passcode or stick with a Touch ID device.

Nothing shown off in a video or breathless headline since launch falls outside any of these limitations.

Hack vs. spoof

One of the most egregious errors in the reporting around Face ID echoes one we saw years ago with Touch ID: the conflation of hacking with spoofing.

When people hear or read the word "hack", it's easy to imagine someone got into the system: in this case, the secure enclave on Apple's A11 Bionic chipset that houses the neural networks for Face ID and its data.

That absolutely has not happened. For both Face ID and Touch ID, the secure enclave remains inviolate. (That's very different from early HTC and Samsung implementations, which stored fingerprint data in world-readable directories...)

What we have seen is people trying to spoof it, fooling it into thinking it's capturing legitimate biometric data. We saw this with Touch ID as well. We saw fingerprints being lifted and reproduced for the express purpose of fooling the sensor system. Even before biometrics, we saw this with traditional keys. People would scan and reproduce keys to get into door locks. It's exactly the type of attack you try against physical security systems.

Now we're seeing the same thing with family members, masks, and Face ID.

Family Face ID feuds

Earlier this month, we saw two brothers post a video claiming one could unlock the Face ID system of the other. I covered it at the time:

One of the videos that got a lot of attention this weekend was made by two brothers, both of whom were eventually able to get Face ID to unlock the same iPhone X. It was revealed in a follow-up video that the first brother set up Face ID, the second brother then tried to use it and was properly locked out, and the second brother then entered the iPhone X passcode to unlock it.

If someone else, including your sibling, has your iPhone X passcode, Face ID doesn't even exist. You've given them much higher access than even Face ID allows — including the ability to reset Face ID and other data on your iPhone X — and, literally, nothing else matters at that point. Keys to the castle. Time to go home.

But for Face ID in particular, there's some interesting behavior that's worth being reminded about: The neural networks that power Face ID are designed to learn and continue to match your face as you change your appearance over time. If you shave your mustache and/or beard, if you change your glasses and/or hairstyle, if you add or remove any makeup and/or facial decorations, as you put on or take off hats and/or scarves, Face ID keeps on recognizing you.

In the video, the second brother wasn't fooling or tricking Face ID in any way. By entering the passcode, he was training it, as designed, to learn his face. By entering the passcode multiple times, the second brother was literally telling Face ID to add his facial data to the first brother's.

More recently, we've seen younger siblings or children unlock the Face ID systems of older siblings or parents. In those cases, Passcode could also be used to train Face ID so it thinks the similar face is a new state of the registered face. In other words, it's introducing fuzziness into the system.

Even in cases where Passcode isn't being used to train a similar face, they're running into two of Apple's previously disclosed limitations:

  • If you're under the age of 13, your facial features may not yet be distinct enough for Face ID to function properly and you'll have to revert to passcode.

  • Face ID can't effectively distinguish between identical twins (or triplets, etc.). If you have an identical sibling or even a similar-looking family member, and you want to keep them out of your iPhone X, you'll have to revert to passcode.

If the facial geometry is the same and the relative is young enough that they lack distinct facial features of their own, the chance of spoofing increases.

Mask confusion

Most recently, a Vietnamese security firm got headlines when it claimed Face ID was successfully spoofed by a dummy face. Similar to how the two brothers initially showed what looked like an immediate unlock but subsequently disclosed the passcode-enabled training, there turned out to be more to the mask attack than the video first showed.

From Reuters:

Ngo Tuan Anh, Bkav's vice president, gave Reuters several demonstrations, first unlocking the phone with his face and then by using the mask. It appeared to work each time.

However, he declined to register a user ID and the mask on the phone from scratch because, he said, the iPhone and mask need to be placed at very specific angles, and the mask to be refined, a process he said could take up to nine hours.

Machine Learning learns

People can shoulder-surf you (spy by looking over your shoulder) to learn your passcode. If you fall asleep they could put your finger on Touch ID. If they're a close family member or twin, they may be able to fool Face ID.

Those first two attacks are against static targets. A passcode never gets harder to spy out. Touch ID is a simple data comparison. Face ID, on the other hand, learns.

Right now that learning is being tested and, in some cases, it's letting in look-almost-alikes that it should keep out. But Apple designed the current neural networks not only to adapt over time but also to be replaceable with better neural networks over time.

From my Face ID Explainer:

Face ID keeps the original enrollment images of your face (but crops them as tightly as possible so as not to store background information). The reason for this is convenience. Apple wants to be able to update the neural network trained for Face ID without you having to re-register your face. This way, if and when the neural networks are updated, the system will automatically retrain them using the images stored in the same region of the secure enclave.

With Face ID, we don't have to wait for new hardware for it to improve. Apple can, and undoubtedly will, improve it any and every time the neural networks get updated.

Choose your own unlocks

With similar-looking relatives, concerns over false positives and unintended or unwanted access are absolutely legitimate. The risk can be mitigated by switching to a passcode, but Face ID is so convenient many will want to use it anyway. In those cases, it's important to remember that Face ID isn't binary. You can turn it on or off, but you can also choose what Face ID can unlock even when it's on.

You can individually enable or disable Face ID for:

  • iPhone unlock
  • Apple Pay
  • iTunes and App Store
  • Safari AutoFill
  • Other Apps (on an app-by-app basis)

So, if you're worried about your sibling or child unlocking your iPhone, you can turn off Face ID for that but leave it on for everything else once you unlock your iPhone with your passcode. You could also leave Face ID on for unlock but turn it off for purchases, if you're worried about those.

Yes, all of those introduce inconveniences, but they let you pick your own inconveniences. And if any of them are a real deal breaker, Apple also offers iPhone 8 with Touch ID, and Passcode and Password options for every iPhone.

Face to Face ID

When you tap a password manager or banking app and you watch it unlock, or you go to a website and your login suddenly fills before your eyes, it makes you forget passwords and passcodes exist. Convenience, though, is perpetually at war with security.

Face ID, like Touch ID and all biometrics, is about convenience and identity. If you're truly concerned with security, you'll want to use a long, strong, unique password. But that's not tenable for most people. So that convenience and identity become vitally important.

And despite all the FUD and frantic headlines, Face ID delivers that. And, in most cases, in a far better, more transparent way than any authentication system before it.

So, by all means be informed. Read and watch everything you can. But don't let anyone scare you just so they can get views or make headlines. Try it out and decide for yourself.


Invermark House

Set below Table Mountain in Cape Town, the Invermark House bears the influence of two iconic homes. Architect Gilbert Colyn designed the home for himself in the late '60s, drawing...

Visit Uncrate for the full post.

LavaRand in Production: The Nitty-Gritty Technical Details


Introduction

[Image: Lava lamps in the Cloudflare lobby. Courtesy of @mahtin]

As some of you may know, there's a wall of lava lamps in the lobby of our San Francisco office that we use for cryptography. In this post, we’re going to explore how that works in technical detail. This post assumes a technical background. For a higher-level discussion that requires no technical background, see Randomness 101: LavaRand in Production.

Background

As we’ve discussed in the past, cryptography relies on the ability to generate random numbers that are both unpredictable and kept secret from any adversary. In this post, we’re going to go into fairly deep technical detail, so there is some background that we’ll need to ensure that everybody is on the same page.

True Randomness vs Pseudorandomness

In cryptography, the term random means unpredictable. That is, a process for generating random bits is secure if an attacker is unable to predict the next bit with greater than 50% accuracy (in other words, no better than random chance).

We can obtain randomness that is unpredictable using one of two approaches. The first produces true randomness, while the second produces pseudorandomness.

True randomness is any information learned through the measurement of a physical process. Its unpredictability relies either on the inherent unpredictability of the physical process being measured (e.g., the unpredictability of radioactive decay), or on the inaccuracy inherent in taking precise physical measurements (e.g., the inaccuracy of the least significant digits of some physical measurement such as the measurement of a CPU’s temperature or the timing of keystrokes on a keyboard). Random values obtained in this manner are unpredictable even to the person measuring them (the person performing the measurement can’t predict what the value will be before they have performed the measurement), and thus are just as unpredictable to an external attacker. All randomness used in cryptographic algorithms begins life as true randomness obtained through physical measurements.

However, obtaining true random values is usually expensive and slow, so using them directly in cryptographic algorithms is impractical. Instead, we use pseudorandomness. Pseudorandomness is generated through the use of a deterministic algorithm that takes as input some other random value called a seed and produces a larger amount of random output (these algorithms are called cryptographically secure pseudorandom number generators, or CSPRNGs). A CSPRNG has two key properties: First, if an attacker is unable to predict the value of the seed, then that attacker will be similarly unable to predict the output of the CSPRNG (and even if the attacker is shown the output up to a certain point - say the first 10 bits - the rest of the output - bits 11, 12, etc - will still be completely unpredictable). Second, since the algorithm is deterministic, running the algorithm twice with the same seed as input will produce identical output.

The CSPRNGs used in modern cryptography are both very fast and also capable of securely producing an effectively infinite amount of output[1] given a relatively small seed (on the order of a few hundred bits). Thus, in order to efficiently generate a lot of secure randomness, true randomness is obtained from some physical process (this is slow), and fed into a CSPRNG which in turn produces as much randomness as is required by the application (this is fast). In this way, randomness can be obtained which is both secure (since it comes from a truly random source that cannot be predicted by an attacker) and cheap (since a CSPRNG is used to turn the truly random seed into a much larger stream of pseudorandom output).
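
To make the seed-to-stream relationship concrete, here is a minimal Go sketch of a deterministic seed-expander in the spirit of HMAC-DRBG. It illustrates the two CSPRNG properties above - output that is unpredictable without the seed, and identical output for identical seeds - but it is a toy for illustration, not a vetted CSPRNG:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// expand stretches a small seed into nBytes of pseudorandom output by
// computing HMAC-SHA256(seed, counter) in counter mode. Without the
// seed the output is unpredictable; with the same seed it is identical
// every time.
func expand(seed []byte, nBytes int) []byte {
	out := make([]byte, 0, nBytes)
	var ctr [8]byte
	for i := uint64(0); len(out) < nBytes; i++ {
		binary.BigEndian.PutUint64(ctr[:], i)
		mac := hmac.New(sha256.New, seed)
		mac.Write(ctr[:])
		out = append(out, mac.Sum(nil)...)
	}
	return out[:nBytes]
}

func main() {
	seed := []byte("a few hundred bits of true randomness")
	fmt.Printf("%x\n", expand(seed, 16))
	fmt.Printf("%x\n", expand(seed, 16)) // deterministic: same seed, same stream
}
```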

Running Out of Randomness

A common misconception is that a CSPRNG, if used for long enough, can “run out” of randomness. This is an understandable belief since, as we’ll discuss in the next section, operating systems often re-seed their CSPRNGs with new randomness to hedge against attackers discovering internal state, broken CSPRNGs, and other maladies.

But if an algorithm is a true CSPRNG in the technical sense, then the only way for it to run out of randomness is for somebody to consume far more values from it than could ever be consumed in practice (think consuming values from a CSPRNG as fast as possible for thousands of years or more).[2]

However, none of the fast CSPRNGs that we use in practice are proven to be true CSPRNGs. They're just strongly believed to be true CSPRNGs, or something close to it. They've withstood the test of academic analysis, years of being used in production, attacks by resourced adversaries, and so on. But that doesn't mean that they are without flaws. For example, SHA-1, long considered to be a cryptographically-secure collision-resistant hash function (a building block that can be used to construct a CSPRNG), was eventually discovered to be insecure. Today, it can be broken for $110,000 worth of cloud computing resources.[3]

Thus, even though we aren’t concerned with running out of randomness in a true CSPRNG, we also aren’t sure that what we’re using in practice are true CSPRNGs. As a result, to hedge against the possibility that an attacker has figured out how to break our CSPRNGs, designers of cryptographic systems often choose to re-seed CSPRNGs with fresh, newly-acquired true randomness just in case.

Randomness in the Operating System

In most computer systems, one of the responsibilities of the operating system is to provide cryptographically-secure pseudorandomness for use in various security applications. Since the operating system cannot know ahead of time which applications will require pseudorandomness (or how much they will require), most systems simply keep an entropy pool[4] - a collection of randomness that is believed to be secure - that is used to seed a CSPRNG (e.g., /dev/urandom on Linux) which serves requests for randomness. The system then takes on the responsibility of not only seeding this entropy pool when the system first boots, but also of periodically updating the pool (and re-seeding the CSPRNG) with new randomness from whatever sources of true randomness are available to the system in order to hedge against broken CSPRNGs or attackers having compromised the entropy pool through other non-cryptographic attacks.

For brevity, and since Cloudflare's production systems run Linux, we will refer to the system's pseudorandomness provider simply as /dev/urandom, although note that everything in this discussion is true of other operating systems as well.
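
Application code rarely reads /dev/urandom directly; language runtimes wrap the system provider. As a minimal illustration in Go, crypto/rand is backed by the kernel's CSPRNG:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"log"
)

func main() {
	// crypto/rand.Read is backed by the kernel's pseudorandomness
	// provider (getrandom(2) / /dev/urandom on Linux).
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%x\n", key)
}
```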

Given this setup of an entropy pool and CSPRNG, there are a few situations that could compromise the security of /dev/urandom:

  • The sources of true randomness used to seed the entropy pool could be too predictable, allowing an attacker to guess the values obtained from these sources, and thus to predict the output of /dev/urandom.
  • An attacker could have access to the sources of true randomness, thus being able to observe their values and thus predict the output of /dev/urandom.
  • An attacker could have the ability to modify the sources of true randomness, thus being able to influence the values obtained from these sources and thus predict the output of /dev/urandom.

Randomness Mixing

A common approach to addressing these security issues is to mix multiple sources of randomness together in the system's entropy pool, the idea being that so long as some of the sources remain uncompromised, the system remains secure. For example, if sources X, Y, and Z, when queried for random outputs, provide values x, y, and z, we might seed our entropy pool with H(x, y, z), where H is a cryptographically-secure collision-resistant hash function. Even if we assume that two of these sources - say, X and Y - are malicious, so long as the attackers in control of them are not able to observe Z's output,[5] then no matter what values of x and y they produce, H(x, y, z) will still be unpredictable to them.
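
As a sketch of this construction, here is the mixing step in Go, using SHA-256 as H and length-prefixing each input so the concatenation of sources is unambiguous:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// mix computes H(x, y, z, ...) over any number of entropy sources. As
// long as at least one input is unpredictable to an attacker, the
// digest is too.
func mix(sources ...[]byte) [sha256.Size]byte {
	h := sha256.New()
	var n [8]byte
	for _, s := range sources {
		binary.BigEndian.PutUint64(n[:], uint64(len(s)))
		h.Write(n[:]) // length prefix keeps input boundaries unambiguous
		h.Write(s)
	}
	var out [sha256.Size]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	x, y, z := []byte("source X"), []byte("source Y"), []byte("source Z")
	fmt.Printf("%x\n", mix(x, y, z))
}
```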

LavaRand

[Image: The view from the camera]

While the probability is obviously very low that somebody will manage to predict or modify the output of the entropy sources on our production machines, it would be irresponsible of us to pretend that it is impossible. Similarly, while cryptographic attacks against state-of-the-art CSPRNGs are rare, they do occasionally happen. It’s important that we hedge against these possibilities by adding extra layers of defense.

That’s where LavaRand comes in.

In short, LavaRand is a system that provides an additional entropy source to our production machines. In the lobby of our San Francisco office, we have a wall of lava lamps (pictured above). A video feed of this wall is used to generate entropy that is made available to our production fleet.

The flow of the "lava" in a lava lamp is very unpredictable,[6] and so the entropy in those lamps is incredibly high. Even if we conservatively assume that the camera has a resolution of 100x100 pixels (of course it's actually much higher) and that an attacker can guess the value of any pixel of that image to within one bit of precision (e.g., they know that a particular pixel has a red value of either 123 or 124, but they aren't sure which it is), then the total amount of entropy produced by the image is 100x100x3 = 30,000 bits (the x3 is because each pixel comprises three values - a red, a green, and a blue channel). This is orders of magnitude more entropy than we need.
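
To show how a frame might be condensed into usable randomness, here is a sketch; readFrame is a hypothetical stand-in for the actual camera capture, and hashing distills the frame's tens of thousands of bits of entropy into a uniform, fixed-size seed:

```go
package main

import (
	"crypto/sha512"
	"fmt"
)

// readFrame is a hypothetical stand-in for grabbing one raw RGB frame
// from the camera (100x100x3 bytes in the conservative estimate above).
func readFrame() []byte {
	return make([]byte, 100*100*3)
}

func main() {
	// Hashing condenses the frame's tens of thousands of bits of
	// entropy into a uniform 512-bit seed for the entropy pool.
	seed := sha512.Sum512(readFrame())
	fmt.Printf("%x\n", seed[:8])
}
```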

Design

[Image: The flow of entropy in LavaRand]

The overall design of the LavaRand system is pictured above. The flow of entropy can be broken down into the following steps:
  1. The wall of lava lamps in the office lobby provides a source of true entropy.
  2. In the lobby, a camera is pointed at the wall. It obtains entropy from both the visual input from the lava lamps and also from random noise in the individual photoreceptors.
  3. In the office, there's a server which connects to the camera. The server has its own entropy system, and the output of that entropy system is mixed with the entropy from the camera to produce a new entropy feed.
  4. In one of our production data centers, there's a service which connects to the server in the office and consumes its entropy feed. That service combines this entropy feed with output from its own local entropy system to produce yet another entropy feed. This feed is made available for any production service to consume.

Security of the LavaRand Service

We might conceive of a number of attacks that could be leveraged against this system:

  • An attacker could train a camera on the wall of lava lamps, attempting to reproduce the image captured by our camera.
  • An attacker could reduce the entropy from the wall of lava lamps by turning off the power to the lamps, shining a bright light at the camera, placing a lens cap on the camera, or any number of other physical attacks.
  • An attacker able to compromise the camera could exfiltrate or modify the feed of frames from the camera, replicating or controlling the entropy source used by the server in the office.
  • An attacker with code running on the office server could observe or modify the output of the entropy feed generated by that server.
  • An attacker with code running in the production service could observe or modify the output of the entropy feed generated by that service.

Only one of these attacks would be fatal if successfully carried out: running code on the production service which produces the final entropy feed. In every other case, the malicious entropy feed controlled by the attacker is mixed with a non-malicious feed that the attacker can neither observe nor modify.[7] As we discussed in a previous section, as long as the attacker is unable to predict the output of these non-malicious feeds, they will be unable to predict the output of the entropy feed generated by mixing their malicious feed with the non-malicious feed.

Using LavaRand

Having a secure entropy source is only half of the story - the other half is actually using it!

The goal of LavaRand is to ensure that our production machines have access to secure randomness even if their local entropy sources are compromised. Just after boot, each of our production machines contacts LavaRand over TLS to obtain a fixed-size chunk of fresh entropy called a “beacon.” It mixes this beacon into the entropy system (on Linux, by writing the beacon to /dev/random). After this point, in order to predict or control the output of /dev/urandom, an attacker would need to compromise both the machine’s local entropy sources and the LavaRand beacon.
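
A minimal Go sketch of that boot-time step follows; the beacon URL is a hypothetical placeholder, and the real service (and the authentication in front of it) isn't public:

```go
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Fetch a fixed-size beacon from LavaRand over TLS. The URL is a
	// hypothetical placeholder for the real, non-public endpoint.
	resp, err := http.Get("https://lavarand.example.internal/beacon")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	beacon, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// Writing to /dev/random mixes the beacon into the kernel's
	// entropy pool (without crediting entropy, which is fine for a
	// hedge).
	f, err := os.OpenFile("/dev/random", os.O_WRONLY, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if _, err := f.Write(beacon); err != nil {
		log.Fatal(err)
	}
}
```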

Bootstrapping TLS

Unfortunately, the reality isn’t quite that simple. We’ve gotten ourselves into something of a chicken-and-egg problem here: we’re trying to hedge against bad entropy from our local entropy sources, so we have to assume those might be compromised. But TLS, like many cryptographic protocols, requires secure entropy in order to operate. And we require TLS to request a LavaRand beacon. So in order to ensure secure entropy, we have to have secure entropy…

We solve this problem by introducing a second special-purpose CSPRNG, and seeding it in a very particular way. Every machine in Cloudflare’s production fleet has its own permanent store of secrets that it uses just after boot to prove its identity to the rest of the fleet in order to bootstrap the rest of the boot process. We piggyback on that system by storing an extra random seed - unique for each machine - that we use for that first TLS connection to LavaRand.

There’s a simple but very useful result from cryptography theory that says that an HMAC - a hash-based message authentication code - when combined with a random, unpredictable seed, behaves (from the perspective of an attacker) like a random oracle. That’s a lot of crypto jargon, but it basically means that if you have a secret, randomly-generated seed, s, then an attacker will be completely unable to guess the output of HMAC(s, x) regardless of the value of x - even if x is completely predictable! Thus, you can use HMAC(s, x) as the seed to a CSPRNG, and the output of the CSPRNG will be unpredictable. Note, though, that if you need to do this multiple times, you will have to pick different values for x! Remember that while CSPRNGs are secure if used with unpredictable seeds, they’re also deterministic. Thus, if the same value is used for x more than once, then the CSPRNG will end up producing the same stream of “random” values more than once, which in cryptography is often very insecure!

This means that we can combine those unique, secret seeds that we store on each machine with an HMAC and produce a secure random value. We use the current time with nanosecond precision as the input to ensure that the same value is never used twice on the same machine. We use the resulting value to seed a CSPRNG, and we use that CSPRNG for the TLS connection to LavaRand. That way, even if the system’s entropy sources are compromised, we’ll still be able to make a secure connection to LavaRand, obtain a new, secure beacon, and bootstrap the system’s entropy back to a secure state!
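
Here is a sketch of that derivation in Go; the names are illustrative, not Cloudflare's actual code:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"time"
)

// bootstrapSeed derives a one-time CSPRNG seed as HMAC(s, x), where s
// is the machine's stored secret and x is the current time in
// nanoseconds - a value that never repeats on a given machine, so the
// derived seed (and the CSPRNG stream it produces) never repeats
// either.
func bootstrapSeed(machineSecret []byte) [sha256.Size]byte {
	var x [8]byte
	binary.BigEndian.PutUint64(x[:], uint64(time.Now().UnixNano()))
	mac := hmac.New(sha256.New, machineSecret)
	mac.Write(x[:])
	var seed [sha256.Size]byte
	copy(seed[:], mac.Sum(nil))
	return seed
}

func main() {
	secret := []byte("per-machine secret from the permanent store")
	seed := bootstrapSeed(secret)
	fmt.Printf("%x\n", seed) // seeds the CSPRNG used for the first TLS connection
}
```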

Conclusion

Hopefully we’ll never need LavaRand. Hopefully, the primary entropy sources used by our production machines will remain secure, and LavaRand will serve little purpose beyond adding some flair to our office. But if it turns out that we’re wrong, and that our randomness sources in production are actually flawed, then hopefully LavaRand will be our hedge, making it just a little bit harder to hack Cloudflare.


  1. Some CSPRNGs exist with constraints on how much output can be consumed securely, but those are not the sort that we are concerned with in this post.
  2. Section 3.1, Recommendations for Randomness in the Operating System
  3. The first collision for full SHA-1
  4. “Entropy” and “randomness” are synonyms in cryptography - the former is the more technical term.
  5. If the attacker controls X and Y and can also observe the output of Z, then the attacker can still partially influence the output of H(x, y, z). See here for a discussion of possible attacks.
  6. Noll, L.C. and Mende, R.G. and Sisodiya, S., Method for seeding a pseudo-random number generator with a cryptographic hash of a digitization of a chaotic system
  7. A surprising example of the effectiveness of entropy is the mixing of the image captured by the camera with the random noise in the camera’s photoreceptors. If we assume that every pixel captured is either recorded as the “true” value or is instead recorded as one value higher than the true value (50% probability for each), then even if the input image can be reproduced by an attacker with perfect accuracy, the camera still provides one bit of entropy for each pixel channel. As discussed before, even for a 100x100 pixel camera, that’s 30,000 bits!

Codename Colossus Schnauzer

Price: $290+  | Pledge | Link

The latest mobile fortress from Michael Sng’s one-of-a-kind dieselpunk toy line. The Schnauzer Armoured Walker is one of the imperial German machines in the Codename Colossus universe. It stands 8.5″ tall and is painted and weathered by hand.


Amazon Key


Do not want.


Apple's Mac Mini is Now Three Years Old, No Refresh Date in Sight

Today marks the third anniversary of the last update of the Mac mini, Apple's most affordable and compact desktop computer. The Mac mini was refreshed on October 16, 2014, and since then, the machine has seen no additional updates.

The Mac mini is positioned as a "bring your own" machine that comes without a mouse, keyboard, or display, and the current version is still running Haswell processors and integrated Intel HD 5000/Intel Iris Graphics.

Pricing on the Mac mini starts at $499 for the entry-level base configuration, making it far more affordable than the iMac, which starts at $1,099 for a non-4K 21-inch version.


With the 2014 refresh, fans were disappointed as Apple ceased offering a quad-core processor option and support for dual hard drives, features that have not returned.

At this point, it's not clear if and when Apple will introduce a new version of the Mac mini. Prior to the 2014 refresh, the Mac mini was updated in 2006, 2007, 2009, 2010, 2011, and 2012, so it's never before gone three years sans update.

Many Apple customers are eagerly awaiting a new Mac mini, including businesses that rely on the machine, like Brian Stucki's MacStadium.

When Apple announced plans for a modular Mac Pro, Apple marketing chief Phil Schiller said the Mac mini "is an important product" in the company's lineup, suggesting Apple doesn't have plans to abandon the machine. He declined to offer up any information on a potential refresh, though.

Aside from a single rumor from Pike's Universum hinting at a new high-end Mac mini with a redesign that "won't be so mini anymore," we've heard no details at all about work on a possible Mac mini refresh.

If a new Mac mini is in the works, though, it could use either Seventh-Generation Kaby Lake chips or Eighth-Generation Kaby Lake Refresh chips, both of which are available now.

The Mac mini typically uses the same 15W U-series chips that are found in the 13-inch MacBook Pro. With Intel's Eighth-Generation chips, the U-series all feature four cores, so should a future Mac mini adopt Kaby Lake Refresh chips or later, quad-core performance will return.

Given that it's October and there are no rumors, it's not likely we're going to see a Mac mini refresh in 2017, but sometime in 2018 is fair game.
