AMD now has their own CPU flaw

SpyderTracks

We love you Ukraine
It would seem that the level 1 cache way predictor on AMD CPUs from 2011 to 2019 has a data-leak vulnerability, known as Take A Way.

See https://www.engadget.com/2020/03/08/amd-cpu-take-a-way-data-leak-security-flaw/

Whitepaper at https://mlq.me/download/takeaway.pdf

It will be interesting to see how AMD react to addressing it compared to Intel. I have a feeling any mitigation will be a lot less extreme in terms of performance, and that they'll be a lot more open and transparent about the underlying issue.

The researchers who found the flaw were funded directly by Intel. I'm not saying it's not an issue that needs addressing, but I think there's a lot more to uncover yet.
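
For anyone curious what this class of attack actually measures, here's a minimal sketch (assuming x86-64, GCC or Clang, Linux) of the generic timing primitive behind cache side channels: a load from a cached line takes far fewer cycles than one from a flushed line. The Collide+Probe attack in the whitepaper targets the L1D way predictor rather than flushed lines, so treat this as an illustration of the measurement building block, not the exploit itself.

/* time a single memory load with the TSC to tell a cache hit from a miss */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_lfence, _mm_mfence, __rdtscp */

static uint64_t time_access(volatile uint8_t *p)
{
    unsigned aux;
    _mm_lfence();                    /* serialise before reading the TSC */
    uint64_t start = __rdtscp(&aux);
    uint8_t v = *p;                  /* the load being timed */
    (void)v;
    uint64_t end = __rdtscp(&aux);
    _mm_lfence();
    return end - start;
}

int main(void)
{
    static uint8_t buf[4096];

    buf[0] = 1;                               /* touch the line: now cached */
    uint64_t hit = time_access(&buf[0]);

    _mm_clflush((void *)&buf[0]);             /* evict the line from the cache */
    _mm_mfence();
    uint64_t miss = time_access(&buf[0]);

    printf("cached: ~%llu cycles, flushed: ~%llu cycles\n",
           (unsigned long long)hit, (unsigned long long)miss);
    return 0;
}

On a typical desktop the flushed access is several times slower, and that observable difference is the whole basis for leaking which addresses a victim process has touched.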
 

Stephen M

Author Level
Agree with SpyderTracks about the speed of AMD's response; if the way they dealt with a couple of Linux issues on Ryzen is anything to go by, it will be very good. Almost as soon as I posted the Phoronix link showing the problem, the issue had been sorted; certainly less than a week, and for a lot of companies anything less than a month is considered an instant response.

A few years ago AMD were absolute pants; now, by hard work and good research, they are streets ahead. It is a pity that Intel seem to be trying to compete by dissing their opponents rather than producing better kit. Also agree that there is a lot more to uncover, but if the best Intel can do to compete is this, then we are a long way off credible competition to Ryzen.
 

SpyderTracks

We love you Ukraine
The main factor I'll be looking at is that all Intel's chips still have the flaws built in; they haven't changed the design, which is what's necessary to circumvent them. They're still happily plodding out that same five-year-old architecture, not giving a damn about security.

We'll see if AMD respond differently, or whether in fact it can be corrected with a microcode update, which is what I'm hoping.
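
If it does come down to microcode, it's easy to see which revision you're currently running on Linux, since the kernel reports it per logical CPU. A minimal sketch (Linux only; assumes the usual x86 "microcode" field in /proc/cpuinfo):

/* print the currently loaded CPU microcode revision from /proc/cpuinfo */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    char line[256];

    if (!f) {
        perror("fopen /proc/cpuinfo");
        return 1;
    }
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "microcode", 9) == 0) {   /* "microcode : 0x..." */
            fputs(line, stdout);
            break;          /* one line is enough; all cores normally match */
        }
    }
    fclose(f);
    return 0;
}

Updated microcode arrives either through a BIOS/UEFI update or through the distribution's CPU microcode packages loaded at boot, so a fix of that sort wouldn't need new silicon.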
 

SpyderTracks

We love you Ukraine
There's an update now:

It appears that, so far, research by AMD and others suggests this isn't a new flaw, just another way to exploit an existing one, and that a patch can be implemented with ZERO performance impact.

 

ubuysa

The BSOD Doctor
For me, the bottom line on all these vulnerabilities is that the attacking code must be running on the target computer. I don't believe any of these vulnerabilities (whether on Intel or AMD) can be exploited remotely, can they?

That being the case, the best defence against exploitation of these or future CPU flaws is to ensure that unauthorised processes cannot execute on your PC. It thus behooves us all to ensure that our antimalware defences are as robust as we can make them.
 

Stephen M

Author Level
Good point, ubuysa. Ultimately one of the best defenses is a common-sense approach. I back up everything and run regular scans; ClamTk is my favourite, but I think that may be Linux only.
 

jerome_jm_martin

Bronze Level Poster
These security issues are a nightmare, a bit scary, but really time consuming. Last year, for a customer, I patched all the system BIOSes, server BIOSes, Cisco switch firmware and also the QNAP NAS. Just for the QNAP they provided 12 firmware updates in 2019, yeah you read that correctly: 12! And I'm not even mentioning the security bulletins for the OSes.

Hopefully most of them can be applied without having a sysadmin in range... and of course make backups, or snapshots!

J.
 

ubuysa

The BSOD Doctor
I know I'm in danger of labouring the point, but purely from a system security viewpoint (something I was responsible for when I headed up the operating system support team in a large multi-mainframe environment) I firmly believe you need to find the lowest common denominator. Of course the various vulnerabilities need to be patched, but I think that for the individual home user, and especially for the overworked sysadmins, it's absolutely essential to understand the attack vectors that can be used to exploit these various vulnerabilities and to find the lowest common denominator - the single thing you can do to stop your system being attacked.

AFAIK all these CPU-based vulnerabilities require the exploiting code to be running on the local machine - so that's the lowest common denominator. If you can ensure that unauthorised code cannot execute on your local machine then you are secured against all these types of vulnerability - including those as yet unknown. Patching the vulnerabilities themselves can then be done in a more structured way, free from the need to shut them down as fast as possible.

Deciding how to stop unauthorised code running on the local machine is also a case of finding the lowest common denominator. Attempting to detect malware via signatures is always going to be a lost cause, because signature detection will never find the zero-day exploit. Detection via heuristics is an almost impossible balance between not missing real malware and not throwing up false positives. For me, the lowest common denominator here is to not attempt to detect malware at all but to run every unknown process in a sandbox, where it has no access to any real resources (any resources it does try to use are virtualised just for that process, so it has no way of knowing that it's sandboxed).

A sandboxed system does require some initial setting up to identify and whitelist all the authorised processes, but once the system is set up and stable it will trap every unknown process - true zero-day security. You really don't care whether an unauthorised process contains malware or not; it can't do you any harm inside the sandbox, thus giving you the time to analyse it in a calm and structured way and decide whether it's safe or not.
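
To make the whitelisting step concrete, here's a minimal sketch of the identify-and-whitelist idea: fingerprint a binary by its SHA-256 hash and only let it run normally if the hash is on a known-good list, otherwise divert it to the sandbox. This is an illustration only; the allowlist entry below is a made-up placeholder, and a real product would also virtualise resources and hook process creation in the kernel. (C, linked against OpenSSL's libcrypto: gcc whitelist.c -lcrypto.)

/* decide whether a binary is on the known-good list by its SHA-256 hash */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

/* hypothetical allowlist: hex SHA-256 digests of approved binaries */
static const char *allowlist[] = {
    "0000000000000000000000000000000000000000000000000000000000000000",
};

static int is_whitelisted(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096], digest[EVP_MAX_MD_SIZE];
    unsigned int dlen = 0;
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);       /* hash the whole file */
    EVP_DigestFinal_ex(ctx, digest, &dlen);
    EVP_MD_CTX_free(ctx);
    fclose(f);

    char hex[2 * EVP_MAX_MD_SIZE + 1];
    for (unsigned int i = 0; i < dlen; i++)
        sprintf(&hex[2 * i], "%02x", digest[i]);
    hex[2 * dlen] = '\0';

    for (size_t i = 0; i < sizeof allowlist / sizeof *allowlist; i++)
        if (strcmp(hex, allowlist[i]) == 0)
            return 1;
    return 0;   /* unknown binary: don't run it outside the sandbox */
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <path-to-binary>\n", argv[0]);
        return 2;
    }
    printf("%s: %s\n", argv[1],
           is_whitelisted(argv[1]) ? "whitelisted, run normally"
                                   : "unknown, divert to sandbox");
    return 0;
}

In a real deployment the "divert to sandbox" branch is where the virtualisation described above kicks in: the unknown process still runs, but only against throwaway copies of the resources it asks for.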
 

jerome_jm_martin

Bronze Level Poster

Your analysis is perfectly right, but not all companies (thinking about the small ones) have the resources, the knowledge, or the will to implement proper IT security. Either they don't know how to do it or, worse, they don't care... until it's too late. What I'm just pointing out is that it's becoming more and more complex to get an IT infrastructure 'properly' secured, as flaws are discovered (and made public) every day, and not only in processors.

But at the end of the day, the first security step is to teach users the basics...


J.
 