
When is a cybersecurity hole not a hole? Never

In cybersecurity, one of the most challenging issues is deciding when a security hole is a big deal, requiring an immediate fix or workaround, and when it is trivial enough to ignore or at least deprioritize. The hard part is that much of this involves the dreaded security by obscurity, where a vulnerability is left in place and those in the know hope that no one finds it. (Classic example: leaving a sensitive web page unprotected, but hoping that its very long, non-intuitive URL won’t be stumbled upon.)

And then there is the real problem: in the hands of an inventive and creative bad actor, almost any hole can be exploited in non-traditional ways. But (and there is always a but in cybersecurity) IT and security professionals cannot pragmatically fix every hole everywhere in the environment.

Like I said, it’s complicated.

Which brings me to a fascinating hole in the M1 processor found by developer Hector Martin, who dubbed it M1racles and posted detailed thoughts about it.

Martin describes it as “a flaw in the design of the Apple Silicon M1 chip [that] allows any two applications running under an OS to covertly exchange data between them, without using memory, sockets, files, or any other normal operating system features. This works between processes running as different users and under different privilege levels, creating a covert channel for surreptitious data exchange. The vulnerability is baked into Apple Silicon chips and cannot be fixed without a new silicon revision.”
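For a sense of how small the mechanism is, here is a minimal sketch of the kind of access Martin describes. It assumes the cluster-shared system register named in his M1racles write-up (s3_5_c15_c10_1, of which only the two low bits are usable from user mode); it is meant to compile with Clang on an M1 Mac and will simply fault on other hardware.

```c
// Minimal sketch of the M1racles covert channel, assuming the
// cluster-shared register from Martin's write-up (s3_5_c15_c10_1).
// Only bits 0-1 of the register are readable/writable from user mode.
#include <stdint.h>
#include <stdio.h>

// Read the two usable bits of the shared register.
static inline uint64_t covert_read(void) {
    uint64_t v;
    __asm__ volatile("mrs %0, s3_5_c15_c10_1" : "=r"(v));
    return v & 3;
}

// Write two bits into the shared register.
static inline void covert_write(uint64_t v) {
    __asm__ volatile("msr s3_5_c15_c10_1, %0" :: "r"(v & 3));
}

int main(void) {
    // A sender process would call covert_write(); a receiver running on
    // the same core cluster would poll covert_read(). No memory, files,
    // or sockets are involved, which is exactly what makes it covert.
    covert_write(2);
    printf("register reads back: %llu\n",
           (unsigned long long)covert_read());
    return 0;
}
```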

Martin added: “The only mitigation available to users is to run your entire OS as a VM. Yes, running your entire OS as a VM has a performance impact,” and then suggested that users should not do this, given that performance hit.

Here is where things get interesting. Martin argues that, in practice, this is not a problem.

“Really, nobody is going to actually find a nefarious use for this flaw in practical circumstances. Besides, there are already a million side channels you can use for cooperative cross-process communication (e.g., cache stuff) on every system. Covert channels can’t leak data from uncooperative apps or systems. Actually, that one is worth repeating: covert channels are completely useless unless your system is already compromised.”
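His point about cooperative channels being plentiful is easy to illustrate. Below is a rough flush+reload-style sketch, shown on x86-64 Linux because its cache-flush and timing instructions are simple to use from user mode; the file path and the 100-cycle threshold are arbitrary examples, not measured values. Two processes that merely map the same read-only file can already signal bits to each other through cache state, no special register required.

```c
// Illustrative flush+reload covert channel between two cooperating
// processes (x86-64 Linux). Both map the same read-only file, so they
// share physical cache lines without any writable shared memory.
// Usage: run "./chan send" in one terminal, "./chan" in another.
#include <emmintrin.h>   // _mm_clflush
#include <x86intrin.h>   // __rdtscp
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static uint64_t timed_load(volatile const char *p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;                       // probe: load the shared line
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(int argc, char **argv) {
    int fd = open("/usr/bin/true", O_RDONLY);  // any file both sides can read
    volatile const char *line =
        mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
    if (fd < 0 || line == MAP_FAILED)
        return 1;

    if (argc > 1 && strcmp(argv[1], "send") == 0) {
        for (;;) {                  // sender: touching the line encodes a 1
            (void)*line;
            usleep(1000);
        }
    }
    for (int i = 0; i < 10; i++) {  // receiver: flush, wait, time a reload
        _mm_clflush((void *)(uintptr_t)line);
        usleep(2000);               // window for the sender to touch the line
        uint64_t dt = timed_load(line);
        printf("bit=%d (reload took %llu cycles)\n",
               dt < 100, (unsigned long long)dt);
    }
    return 0;
}
```

A fast reload means the sender touched the line (a 1); a slow reload means it stayed flushed (a 0). That is all “cache stuff” amounts to when both sides cooperate.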

Martin initially thought the flaw could be partially mitigated, but he changed his mind: “I initially thought the register was per-core. If it were, then you could just wipe it on context switches. But since it’s per-cluster, sadly, we’re kind of screwed, since you can do cross-core communication without ever entering the kernel. Other than running in EL1/0 with TGE=0 (i.e., inside a VM guest), there’s no known way to block it.”
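To make the per-core versus per-cluster distinction concrete, here is a hypothetical sketch of the mitigation Martin ruled out. The register name comes from his write-up; the context-switch hook is imaginary pseudocode, not any real kernel API.

```c
#include <stdint.h>

// Hypothetical mitigation: if the register were per-core, the kernel
// could zero it on every context switch, and no stale bits would
// survive from one process to the next.
static inline void wipe_covert_register(void) {
    uint64_t zero = 0;
    __asm__ volatile("msr s3_5_c15_c10_1, %0" :: "r"(zero));
}

// Imaginary context-switch hook (not a real kernel API):
//
//   void on_context_switch(void) {
//       wipe_covert_register();  // only helps if the register is per-core
//   }
//
// Because the register is actually shared by a whole core cluster, two
// processes pinned to different cores in the same cluster can exchange
// bits while both stay scheduled; neither core ever context-switches
// between the write and the read, so the wipe never breaks the channel.
```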

We have no further information on whether Apple intends to scan App Store submissions for this kind of register access, or whether it already does, but the company is aware of the potential issue, and it would be reasonable to expect it to act. It is even possible that the existing automated review process already rejects any attempt to use system registers directly.

This is where I start to worry. The security mechanism here is trusting the people behind the Apple App Store to detect an app trying to exploit this. Really? Neither Apple nor Google’s Android operation has the resources to properly vet every app submitted. If it looks good at a quick glance (an area in which professional bad actors excel), both mobile giants will approve it.

In an otherwise excellent article, Ars Technica said: “The covert channel could circumvent this protection by passing the keystrokes to another malicious app, which in turn would send them over the Internet. Even then, the chances of two apps passing Apple’s review process and then getting installed on a target’s device are far-fetched.”

Far-fetched? Really? IT is supposed to take comfort that this hole won’t hurt because the odds are against an attacker successfully exploiting it, which in turn relies on Apple’s review team catching any problematic apps. That is pretty scary logic.

This brings us back to my original point. What is the best way to deal with holes that would take a lot of work and luck to become a problem? Given that no company has the resources to properly address every hole in its systems, what should an overworked and understaffed CISO team do?

Still, it is refreshing to see a developer find a hole and then downplay it as no big deal. But now that the hole has been documented publicly in impressive detail, my money is on some cyberthief or ransomware extortionist figuring out how to use it. I would give them less than a month to take advantage of it.

Apple needs to push to fix this as soon as possible.