Since OpenID is essentially a URL-resolver security design concept (assuming it's not a native XRI resolver case), OpenID2 does feel a bit more exposed to the consequences of the particular construction they laid out. The whole mapping of "identity" via HTTP - the heart of the OpenID concept - requires secure discovery. Because that is hard, the OpenID form of web SSO is interesting!
Surprised they didn't draw conclusions about the hxri modes of XRI resolution in OpenID2, which rely on https (as a resolver, not just as a way of invoking a secure channel). An academic completeness oversight, really: a failure to define the limits of the logic.
All good news for OpenID2, though - that research/academic folk *want* the PR/association with it (while hopefully addressing the PRNG-based flaws usually found in these kinds of grassroots crypto communities)!
Security Advisory (08-AUG-2008) (CVE-2008-3280)
Ben Laurie of Google's Applied Security team, while working with an external researcher, Dr. Richard Clayton of the Computer Laboratory, Cambridge University, found that various OpenID Providers (OPs) had TLS Server Certificates that used weak keys, as a result of the Debian Predictable Random Number Generator (CVE-2008-0166).
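The core of CVE-2008-0166 was that Debian's patched OpenSSL left the process ID as essentially the only entropy feeding key generation, so only about 32,768 distinct keys could ever be produced per key size/architecture. A toy Python sketch (hypothetical key generation, not the real OpenSSL code path) of why such a keyspace is trivially enumerable:

```python
import random

PID_MAX = 32768  # Linux default pid space: the only "entropy" left after the Debian patch

def toy_keygen(pid: int) -> int:
    """Stand-in for RSA key generation: fully deterministic given the PRNG seed (the pid)."""
    rng = random.Random(pid)       # seeded solely by the pid, mimicking CVE-2008-0166
    return rng.getrandbits(2048)   # toy "private key"

# An attacker precomputes every possible key once...
weak_keys = {toy_keygen(pid) for pid in range(PID_MAX)}

# ...then recognises (and owns) any key generated on an affected system.
victim_key = toy_keygen(1234)      # whatever pid the victim's keygen process had
print(victim_key in weak_keys)     # True: the key is in the enumerable set
```

This is exactly why blacklists of the weak keys (e.g. Debian's openssl-blacklist) could be shipped: the full set fits on a DVD.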
In combination with the DNS Cache Poisoning issue (CVE-2008-1447) and the fact that almost all SSL/TLS implementations do not consult CRLs (currently an untracked issue), this means that it is impossible to rely on these OPs.
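The "implementations do not consult CRLs" point is concrete in most TLS stacks: revocation checking is off by default and must be opted into explicitly. A sketch using only Python's stdlib `ssl` module (shown here as one example stack, not the implementations the advisory surveyed):

```python
import ssl

ctx = ssl.create_default_context()

# A default context verifies the chain and hostname but does NOT consult CRLs,
# so a certificate revoked after the Debian disclosure would still validate.
print(bool(ctx.verify_flags & ssl.VERIFY_CRL_CHECK_LEAF))  # False by default

# Revocation checking must be enabled explicitly, and the CRL itself must be
# loaded manually alongside the CA bundle via ctx.load_verify_locations(...).
ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF
print(bool(ctx.verify_flags & ssl.VERIFY_CRL_CHECK_LEAF))  # True
```

Note that even with the flag set, the connection fails unless a current CRL has actually been loaded - which is precisely why almost nobody does it.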
This affects web sites and service providers of any nature; it's not exclusive to OpenID or to any other protocol / standard / service! It may affect an OpenID provider if that provider uses a compromised key in combination with unpatched DNS servers. I don't understand why OpenID is singled out, since the issue can potentially affect any web site, including Google's various services (had Google used Debian systems to create their private keys).