StewartR
Suspended / Banned
- Messages
- 11,513
- Name
- Stewart
- Edit My Images
- Yes
Just trying to get a sense of perspective here...
I understand how the Heartbleed bug works, in conceptual terms. Send the appropriate command to a web server running OpenSSL, and in return it will send you up to 64k of data from its memory. I also understand how that can theoretically be very bad. If the 64k of data which has been sent to the attacker contains your password, and if you use that same password elsewhere, then you could be very vulnerable.
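That mechanism (the server trusting an attacker-supplied length field, capped at 64 KB because the field is 16 bits) can be sketched as a toy simulation. This is not real OpenSSL code; the buffer contents and function names are invented for illustration:

```python
# Hypothetical simulation of the Heartbleed over-read (not real OpenSSL code).
# The server's "memory": a heartbeat payload sitting next to unrelated data.
SERVER_MEMORY = bytearray(
    b"HB-PAYLOAD" + b"...user=alice&password=hunter2..."
)

ACTUAL_PAYLOAD_LEN = 10  # the genuine payload is only 10 bytes

def vulnerable_heartbeat(claimed_len: int) -> bytes:
    # Bug: echo back claimed_len bytes, trusting the attacker-supplied
    # length instead of checking it against the real payload size.
    return bytes(SERVER_MEMORY[:claimed_len])

def fixed_heartbeat(claimed_len: int) -> bytes:
    # The fix: silently discard requests whose claimed length exceeds
    # the actual payload, per RFC 6520.
    if claimed_len > ACTUAL_PAYLOAD_LEN:
        return b""
    return bytes(SERVER_MEMORY[:claimed_len])
```

Asking `vulnerable_heartbeat` for more bytes than the payload holds returns adjacent memory, secrets included, while the fixed version returns nothing.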
I'm sure it's monumentally stupid from a software engineering point of view. I would have expected that any kind of rudimentary quality control process would have picked it up. A few years ago I used to manage a software engineering team and I would have been appalled if something like this had slipped through.
But how bad is it in the real world? I mean, that 64k of data which gets sent to the attacker is just a random chunk of the server's memory. It's not like a database with a defined structure which you can use to interpret the data. If there are usernames and passwords in there, they're not going to be identified as such. And if you send the command multiple times you'll get back multiple random chunks of memory which may or may not have anything in common, and may or may not have changed between snapshots, so correlating the 'take' from multiple attacks would be very difficult, if not impossible.
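One caveat worth noting on the "not identified as such" point: secrets tend to have recognisable shapes, so an attacker doesn't need to interpret the whole chunk, just grep it. A minimal sketch (the patterns and sample chunk are invented for illustration):

```python
# Hypothetical sketch: mining a leaked memory chunk with pattern matching.
# Credentials, auth headers, and key material all have distinctive shapes.
import re

PATTERNS = [
    rb"password=[^&\s\x00]+",                   # form-encoded credentials
    rb"Authorization: Basic [A-Za-z0-9+/=]+",   # HTTP Basic auth headers
    rb"-----BEGIN [A-Z ]*PRIVATE KEY-----",     # PEM key material
]

def scan_chunk(chunk: bytes) -> list:
    """Return every recognisable secret-shaped byte string in the chunk."""
    hits = []
    for pat in PATTERNS:
        hits.extend(re.findall(pat, chunk))
    return hits

# A made-up "random" chunk, as returned by repeated heartbeat requests:
chunk = b"\x00\x17junk...password=hunter2&next=/home\x00\x00more junk"
```

So even without any correlation between snapshots, a single lucky chunk can be enough.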
So whilst it's theoretically bad, I can't see how it's actually that bad in practical terms. Am I being naive?