URLs need to be persistent and that’s not so likely when you use these services. But the ever increasing popularity of Twitter, who impose a 140 character limit on tweets, means that more and more URLs are getting shortened. The ridiculous thing is it isn’t even necessary.
…could solve at least some of these problems. It provides a service to expand short URLs from many, many providers into long URLs.
That’s cool because:
it caches the expansion, so it has a persistent store of short ↔ long mappings. They plan to expose these mappings on the web, which would also solve [reliance on 3rd party – if they go out of business links break]
Of course what would be extra cool would be if, in addition to the source code being open sourced, so was the underlying database. That way if anything happened to longurl.org someone else could resurrect the service.
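The caching idea is simple enough to sketch. This is not longurl.org's actual implementation, just an illustration of the principle: resolve a short URL once, keep the mapping, and serve it from the store thereafter. The resolver is pluggable here so the example stays self-contained; a real service would follow HTTP redirects (e.g. with `urllib.request`).

```python
class UrlExpander:
    """Expand short URLs, keeping a persistent store of short <-> long mappings."""

    def __init__(self, resolver, cache=None):
        self.resolver = resolver          # callable: short URL -> long URL
        self.cache = cache if cache is not None else {}

    def expand(self, short_url):
        # Serve from the stored mapping if we've seen this URL before,
        # so the link keeps working even if the shortener disappears.
        if short_url not in self.cache:
            self.cache[short_url] = self.resolver(short_url)
        return self.cache[short_url]

# Usage with a stand-in resolver (a real one would make an HTTP request):
fake_resolver = {"http://tinyurl.com/abc": "http://example.com/a/very/long/path"}.get
expander = UrlExpander(fake_resolver)
print(expander.expand("http://tinyurl.com/abc"))
```

If the `cache` dict were backed by a database and published, anyone could resurrect the mappings, which is exactly the point made above.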
All good stuff. But the really ironic thing is that none of this should be necessary. The ‘in 140 characters or less’ thing isn’t true. As Michael points out:
if i write a tweet to the 140 limit that includes a link then <a href="whatever">whatever</a> will be added to the message. so whilst the visible part of the message is limited to 140 chars the message source isn’t. There’s no reason twitter couldn’t use the long url in the href whilst keeping the short url as the link text…
All Twitter really needs to do is provide their own shortening service – if you enter anything that starts “http://” it gets shortened in the visible message. Of course it doesn’t really need to provide a unique, hashed URL at all; it could convert the anchor text to “link” or the first few letters of the title of the target page while retaining the full-fat, canonical URL in the href.
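The idea is easy to sketch. This is purely illustrative, not anything Twitter does: the 30-character display limit is an assumed value, and the point is simply that the visible label can be truncated while the href keeps the canonical URL.

```python
import re

def render_tweet(text, display_limit=30):
    """Shorten only the visible anchor text; keep the full URL in the href.

    The display_limit of 30 chars is an assumption for illustration."""
    def linkify(match):
        url = match.group(0)
        label = url if len(url) <= display_limit else url[:display_limit] + "…"
        return '<a href="{}">{}</a>'.format(url, label)
    return re.sub(r'https?://\S+', linkify, text)

print(render_tweet("reading http://example.com/some/really/long/path/to/an/article now"))
```

The visible message stays short, but anyone copying the link, and any service expanding it later, gets the persistent, canonical URL.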
Following the dot-com boom of the late 1990s, when anyone and everybody who could code worked all hours to realise their ideas, there was a collapse and loads of people were unemployed. Prior to the collapse some people made loads of money; of course, some of them then went and lost it, and some people just worked long hours for no real benefit. But that’s not really the point. The point is that after the dot-com bubble burst we saw the emergence of new tech companies that laid the foundations for the whole Web 2.0 thing.
During the late ’90s everyone was busy, busy, busy doing stuff for paying clients, and certainly in the early days there was some genuine innovation. But there was also a lot of dross, and all those client demands meant we didn’t always have the time to play with the medium and try out new ideas. Following the collapse in 2001, though, people were suddenly able to explore the medium and develop new ideas. That’s not to say there weren’t economic pressures on those development teams; in many ways the pressures were more acute because there wasn’t a VC buffering your cash flow. And as one of those companies put it, you needed to get real:
Getting Real is about skipping all the stuff that represents real (charts, graphs, boxes, arrows, schematics, wireframes, etc.) and actually building the real thing. […]
Getting Real delivers just what customers need and eliminates anything they don’t.
It seems that when we have unemployment among geeks we see true innovation: genuinely new ideas coming to market. During the good times, when employment is high, we tend to see a raft of “me-toos” commissioned by people who too often don’t really understand the medium and aren’t motivated to come up with new ideas, instead focusing on how to make the existing ideas bigger, better and faster, because that’s lower risk. That period certainly has its advantages; unfortunately, it seems innovation isn’t one of them.
The current economic depression is clearly bad news, potentially very bad news indeed, but what it might mean is that we’re in for another period of innovation as more and more geeks find themselves unemployed and start setting up on their own.
However, while it is great news that Twitter will be implementing OAuth soon, they haven’t yet, and there are plenty of other services that don’t use it. It’s therefore worth pausing for a moment to consider how we got here and what the issues are, because while it will be great, right now it’s a bit rubbish.
We shouldn’t assume that either Twitter or the developers responsible for the third-party apps (those requesting your credentials) are trying to do anything malicious — far from it — as Chris Messina explains:
The difference between run-of-the-mill phishing and password anti-pattern cases is intent. Most third parties implement the anti-pattern out of necessity, in order to provide an enhanced service. The vast majority don’t do it to be malicious or because they intend to abuse their customers — quite the contrary! However, by accepting and storing customer credentials, these third parties are putting themselves in a potentially untenable situation: servers get hacked, data leaks and sometimes companies — along with their assets — are sold off with untold consequences for the integrity — or safety — of the original customer data.
The folks at Twitter are very aware of the risks associated with their users giving out usernames and passwords. But they also have concerns about the fix:
The downside is that OAuth suffers from many of the frustrating user experience issues and phishing scenarios that OpenID does. The workflow of opening an application, being bounced to your browser, having to login to twitter.com, approving the application, and then bouncing back is going to be lost on many novice users, or used as a means to phish them. Hopefully in time users will be educated, particularly as OAuth becomes the standard way to do API authentication.
Another downside is that OAuth is a hassle for developers. BasicAuth couldn’t be simpler (heck, it’s got “basic” in the name). OAuth requires a new set of tools. Those tools are currently semi-mature, but again, with time I’m confident they’ll improve. In the meantime, OAuth will greatly increase the barrier to entry for the Twitter API, something I’m not thrilled about.
It also doesn’t change the fact that someone could sell OAuth tokens, although OAuth makes it easier to revoke credentials for a single application or site, rather than changing your password, which revokes credentials to all applications.
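The “hassle for developers” point is concrete: with Basic Auth you just attach a username and password, but with OAuth 1.0a every request must carry a signed parameter set. Here is a minimal sketch of the HMAC-SHA1 signing step, the part that usually trips people up; all keys and tokens below are made-up placeholders, not real Twitter credentials.

```python
import base64
import hashlib
import hmac
import urllib.parse

def oauth_signature(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0a HMAC-SHA1 signature for one request."""
    # 1. Percent-encode each key and value, then sort the pairs.
    encoded = sorted(
        (urllib.parse.quote(k, safe=""), urllib.parse.quote(str(v), safe=""))
        for k, v in params.items()
    )
    param_string = "&".join("{}={}".format(k, v) for k, v in encoded)
    # 2. Build the signature base string: METHOD&url&params, each encoded.
    base = "&".join([
        method.upper(),
        urllib.parse.quote(url, safe=""),
        urllib.parse.quote(param_string, safe=""),
    ])
    # 3. Sign with HMAC-SHA1, keyed on consumer secret + token secret.
    key = "{}&{}".format(
        urllib.parse.quote(consumer_secret, safe=""),
        urllib.parse.quote(token_secret, safe=""),
    )
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Placeholder values for illustration; a real client generates a fresh
# nonce and timestamp for every request.
params = {
    "oauth_consumer_key": "example-key",
    "oauth_nonce": "abc123",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": "1234567890",
    "oauth_token": "example-token",
    "oauth_version": "1.0",
    "status": "hello world",
}
sig = oauth_signature("POST", "https://example.com/statuses/update.json",
                      params, "consumer-secret", "token-secret")
print(sig)
```

Compare that to Basic Auth’s single base64-encoded header, and it’s clear why the barrier to entry goes up, even if the security trade-off is worth it.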
Digital identities exist to enable human experiences online, and if you store someone’s identity you have a relationship with them. So when you force third-party apps into collecting usernames, passwords and any other snippets of someone’s identity, you force those users into having a relationship with that company — whether the individual or the company wants it or not.
With technology we tend not to enable trust in the way most people use the term. Trust is based on relationships. In close relationships we make frequent, accurate observations that lead to better understanding; this process, however, requires investment and commitment. That said, a good, useful relationship provides value for all parties. Jamie Lewis has suggested that there are three types of relationship on the web:
Custodial Identities — identities are directly maintained by an organisation and a person has a direct relationship with the organisation;
Contextual Identities — third parties are allowed to use some parts of an identity for certain purposes;
Transactional Identities — credentials are passed for a limited time for a specific purpose to a third party.
Of course there are also some parts to identity which are shared and not wholly owned by any one party.
This mirrors how real-world identities work. Our banks, employers and governments maintain custodial identities, whereas a pub, validating your age before serving alcohol, needs only a yes/no question answered: are you over 18?
Twitter acts as a custodian for part of my online identity, and I don’t want third-party applications that use the Twitter API to also act as custodians, but the lack of OAuth support means that, whether they or I like it, they have to. They should only have my transactional identity. Forcing them to hold a custodial identity places both parties (me and the service using the Twitter API) at risk and places unnecessary costs on the third-party service (whether they realise it or not!).
But, if I’m honest, I don’t really want Twitter to act as Custodian for my Identity either — I would rather they held my Contextual Identity and my OpenID provider provided the Custodial Identity. That way I can pick a provider I trust to provide a secure identity service and then authorise Twitter to use part of my identity for a specific purpose, in this case micro-blogging. Services using the Twitter API then either use a transactional identity or reuse the contextual identity. I can then control my online identity, those organisations that have invested in appropriate security can provide Custodial Identity services and an ecosystem of services can be built on top of that.
Just wanted to correct a couple of mistakes, as pointed out by Chris, below:
1. Twitter was hacked with a dictionary attack against an admin’s account. Not from phishing, and not from a third-party’s database with Twitter credentials.
2. The phishing scam worked because it tricked people into thinking that they received a real email from Twitter.
Neither OpenID nor OAuth would have prevented this (although that’s not to say Twitter shouldn’t implement OAuth). Sorry about that.