Thursday, December 3rd, 2009

The End of Days for “View Source”?

Category: Editorial, Performance

[Image: Apocalypse 2012]

“View Source is your friend”, we’ve learned countless times as web developers. It’s something special about web development that we can seamlessly lift the covers on anything we see and find out how the sausage is made. And it gets even better with great tools to interrogate the system in real-time. This capability has helped us evolve practices and patterns, and contributed to the production of many a fine browser extension and Greasemonkey script.

Our friend might sadly be going the way of the blink tag. View Source has always worked because the standard development model is to put up some static Javascript files on a server somewhere and serve them out. That model is changing, though: performance is a very hot topic right now, and View Source is falling victim to that trend.

Google’s Let’s Make the Web Faster initiative is a case in point. Here is a multi-pronged attack on the performance issue, involving new protocols (SPDY), tools (PageSpeed), browser improvements (Chrome), on-demand loading (Closure), and – most pertinent – compression techniques (Closure again). And we ain’t seen nothing yet; there’s every reason to believe Google will soon be putting its money where its mouth is, by rewarding faster sites with higher rankings. (I guess I was alone in assuming they always did that.) While performance should always be a consideration for site owners, a dangling SEO carrot would no doubt convert the best of intentions into the most concrete of actions.

Site owners can’t (much) control factors such as browser choice and browser efficiency, but they can get their own performance-fu in order, and code compression is low-hanging fruit. Looking at the top 20-ranked sites, filtering only English-language sites, I found the very top guys (Google, Facebook, Yahoo, YouTube, Windows Live) predominantly using Javascript compression, with the others not using it much, if at all. I expect most of them to be using it in the next 12-24 months.
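
As a hedged illustration of what that low-hanging fruit looks like (the function and the byte counts are invented for the example), a readable source file such as:

function calculateShippingCost(weightInKg, ratePerKg) {
    // Descriptive names make the intent obvious to anyone viewing source.
    var baseFee = 5.0;
    return baseFee + weightInKg * ratePerKg;
}

might reach the browser as something like:

function calculateShippingCost(a,b){return 5.0+a*b}

Roughly half the bytes before gzip even gets involved, but the self-documenting names and comments are gone for good.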

In addition to compression, there is also obfuscation. With Javascript being used for more complex tasks and replacing desktop functionality, more companies will be wondering about all that intellectual property sitting in plain view. (And let’s not mention the security-by-obscurity fans, who will also go this route, however flawed their thinking.)

Is it all bad? No. There’s a much healthier respect for plain old semantic HTML these days, which means HTML documents should be View Sourcier now than ever before. (CSS, not so much, with compression also likely to grow.) If I had to choose between one or the other, I’d take clear HTML over clear Javascript. We will probably also find that the majority of sites in the long tail won’t feel the need to do anything about their code (though the ones who do make the effort are probably the ones with the most interesting things to look at). And the aforementioned tools, which do things like XHR sniffing, will help us understand sites from a “black box” perspective even if we can’t peer into the code. Hopefully, more attention will also be paid to Javascript beautifiers, to make sense of compressed code.

I can’t speculate on the waning of View Source without mentioning the tremendous counter-balancing act played out by Open Source. From the get-go, open source has played a vital role in Ajax, with individuals and companies releasing code for all sorts of reasons. Most of this is library and framework code, rather than production-ready applications, so we might lose something there, but we still have much to gain from the ever-growing corpus of code that’s out there, free to be studied and incorporated into our own applications.

Posted by Michael Mahemoff at 10:40 am

24 Comments »


I don’t see “View Source” going away any time soon, as static code/markup is still absolutely critical for content. That said, Firebug’s “Net” tab, the resource panel in Safari, and similar tools are increasingly useful given the growing number of requests made via JS for JS, and via XHR.
 
JS obfuscation is generally ineffective, typically reserved for viruses and exploits, and given DOM inspectors, code prettifiers and JS debugging tools, developers can still unravel what any script is ultimately doing as it runs.

Comment by Schill — December 3, 2009

I truly hope this is not the case: the web exists because of view source. And as Schill said, with Firebug and the Web Inspector we get everything decoded and prettified for us anyway.

I completely disagree with those companies who believe that obscuring their code is a good way to go. Yes, intellectual property is important but there is a certain level of openness about the web and we cannot abandon that.

Comment by thomasjbradley — December 3, 2009

I have to agree with Thomas. We all got here (including Google’s applications) by standing on the shoulders of giants and learning from the newest tips and tricks of the day. I think that universal adoption of these tactics would result in a serious slow-down in innovative web development techniques from almost all sources outside of companies like Google. I’d find it difficult to explain the philosophical differences between Closure and Java Applets, for example.

Personally, I would have loved for openness and transparency to be another goal of projects like Closure. Speed is important but it’s not the only important factor. While those projects seem to be driven solely by performance, I find myself curious but still reluctant to adopt them. I’m still viewing source on many other sites looking for the next big thing…

Comment by AndreCU — December 3, 2009

Someone on the CommonJS Google Group the other day suggested using an X-ViewSource HTTP header as a pointer to the source for server-side JS apps. Perhaps a similar convention could be adopted for sites with compressed/obfuscated JS.
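
For illustration, a minimal sketch of how a server might emit such a header alongside a compressed script. X-ViewSource is only a proposed convention, and the Node.js code, file names, and URL below are hypothetical:

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    // Serve the minified build, but point curious humans at the readable source.
    res.writeHead(200, {
        'Content-Type': 'application/javascript',
        'X-ViewSource': 'http://example.com/src/app.js'
    });
    fs.createReadStream('app.min.js').pipe(res);
}).listen(8080);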

Comment by smith — December 3, 2009

It’s probably not a good idea to allow current leaders, possessing the inevitable monopolization mindset, to define how the interweb works. It’s a simple point: whenever you start deciding what is open and what is not, what is allowed and what is not, what is visible and what is not, based on the needs (read: desires) of a profit-focused corporation, you have intentionally accepted the crusty end of a “bread and circuses” arrangement. Companies that obscure their code gain profit and lose connection to the community. That is their right. The community is not *wrong* for preferring, and defending, open code sharing. And then there is the suggestion that capitulation should be rewarded with a high-paying job in a comfy building at this or that monopoly, where your greed can be sated and your voice silenced.

Comment by nataxia — December 3, 2009

@smith – That is a GREAT idea.

Comment by Malic — December 3, 2009

Since this article talked about both compression and intellectual property, where does the compromise happen for JavaScript files that are supposed to have some license text contained therein that then gets stripped out during the compression process? Right now I don’t think most people care, but you know at some point it is going to come up.

Comment by blepore — December 3, 2009

Interesting opinion. However, I do not understand English so well, so I want to ask again: is it better to prefer clear JavaScript or clear HTML?

Comment by Zmicer — December 3, 2009

@Schill:
“JS obfuscation is generally ineffective, typically reserved for viruses and exploits” – that’s just not true. Many compression techniques modify the code in order to make it shorter. The most obvious is shortening the names of internal variables and methods, but there is a lot more – you can remove some parentheses and semicolons in certain contexts, replace repeating strings with variables, etc. In my experience, JS minification shaves off ~30%; compression, another ~20%. The drawback is unreadable code, but the gain is a ~50% smaller file size.
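
To sketch the kinds of transforms I mean (the snippet is made up for illustration), a source function like:

function updateStatus(message) {
    document.getElementById('status').innerHTML = message;
    document.getElementById('log').innerHTML = 'Logged: ' + message;
}

might come out of a minifier as:

function updateStatus(a){var d=document;d.getElementById('status').innerHTML=a;d.getElementById('log').innerHTML='Logged: '+a}

Local names are shortened, whitespace and optional semicolons are dropped, and a repeated reference is hoisted into a one-letter variable.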

Let’s not forget that the code is and will remain viewable – a hardcore developer will be able to sift through it to find that nifty feature no matter what. Simple copy-paste – yeah, that might go. But I don’t really believe it is widely used even now.

Comment by jx12345 — December 3, 2009

This is a natural and necessary trend, especially with things like Javascript and CSS, which, if left in their source form, can become intractably large (you can argue that they should be small, but that’s essentially an argument against dynamic applications in general). No reasonable developer uses obfuscation to “protect their code”; it’s an optimization (over the wire and at parse time) plain and simple.

Insisting on clear unobfuscated scripts is an argument that all users should pay (in time) for your desire to view the source of an application. I believe that’s a completely unreasonable insistence, and it’s never really been guaranteed anyway. What matters is sharing code and ideas (through open discussion and open source), not the “view source” mechanism for doing so.

Comment by jgw — December 3, 2009

Just add a new attribute to the script tag: VIEW

<script src='minified.js' view='complete.js'></script>

simple, easy, compliant

Comment by Ajaxerex — December 3, 2009

Problem solved: http://jsbeautifier.org/
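
For instance, assuming the js-beautify library behind that site (this uses its Node packaging; the ugly input string is just an example):

var beautify = require('js-beautify').js;
var ugly = "function f(a){var b='Hello, ';return b+a}";
// Prints the same logic re-indented and readable, though the
// original variable names are still gone for good.
console.log(beautify(ugly, { indent_size: 2 }));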

Comment by WillPeavy — December 3, 2009

@jx12345: Compression/minification via YUI Compressor et al., removing whitespace and shortening local variables etc., is a different beast from obfuscation, where scrambling and hiding the meaning of the code, methods and so on is the end goal.

Compression/minification (+gzip when served) is excellent, and still leaves readable code – something I consider a good thing – when run through tools like jsbeautifier, as WillPeavy mentioned, unlike obfuscation, as in the virus/exploit example I linked earlier, where it remains very unclear what the code actually does.
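
A contrived side-by-side, with both lines made up for illustration: a minified function keeps its structure in plain view,

function f(a){return a*2}

while a deliberately obfuscated one hides even that behind encoding and eval:

eval(unescape('%66%75%6E%63%74%69%6F%6E%20%66%28%61%29%7B%72%65%74%75%72%6E%20%61%2A%32%7D'));

A beautifier fixes the first instantly; the second has to be unpacked before you can even start reading.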

Comment by Schill — December 3, 2009

@Ajaxerex
But not usable. If a build mechanism that automatically glues together files and compresses them is used, then there may never be a single readable file to link to as the uncompressed version. Many uncompressed -> single compressed. You can put links to these files in your projects now if you like – no need for extra attributes. The web already has what you need.
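
If you want the pointer anyway, the build step can prepend it as a comment. A rough Node sketch, with placeholder file names:

var fs = require('fs');
var sources = ['src/core.js', 'src/ui.js', 'src/net.js'];
var banner = '/*! Built from: ' + sources.join(', ') +
             ' - readable copies at http://example.com/src/ */\n';
var combined = sources.map(function (f) {
    return fs.readFileSync(f, 'utf8');
}).join('\n');
fs.writeFileSync('build/app.js', banner + combined);
// Run the minifier over build/app.js afterwards; YUI Compressor, for
// one, preserves comments that start with /*!.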

@Schill
I agree – deliberate code obfuscation is evil, and evil sites use it. But I don’t think that is the point Michael Mahemoff was trying to make in this article; he talks about legitimate code compression.

The only possible legitimate reason for obfuscation (at first glance) could be to protect some proprietary technology. If I were to protect million-dollar algorithms, I would not put them on the web in any shape or form. Just prepare the data on the server, make an XHR request, and everyone’s happy. Javascript in the browser should not be used for such code – keep that locked away.
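
Something like this classic pattern, with a hypothetical endpoint: the million-dollar algorithm runs server-side, and only its output ever crosses the wire.

var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/price-quote?sku=1234', true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // Only the computed result is visible to View Source tools.
        document.getElementById('quote').innerHTML = xhr.responseText;
    }
};
xhr.send(null);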

Comment by jx12345 — December 3, 2009

@jx12345

Ok, you got me there.

How about separating URLs with commas or pipes?

Comment by Ajaxerex — December 3, 2009

It’s not clear to me that minification/packing gains you much over just gzipping the source. I can understand gluing files together to reduce HTTP overhead, but I just don’t get what your compression/obfuscation stuff is useful for. Where are the stats? If this article is to be believed, we should see the reasoning behind the big guys switching over to code compression/obfuscation, or a refutation of that reasoning if the evidence is against it. Otherwise it’s just scaremongering, and what is the point of that?

Comment by Breton — December 3, 2009

First of all, obfuscation on the web is mostly done for performance reasons.

Second of all, I think this article’s major argument for why ‘View Source’ will die is all this performance tweaking, which makes the source basically unreadable.

However, if you have only read or heard about these performance tweaks, they might be cause for alarm. But if you’ve actually implemented them, you will know that there’s no way you’re going to sacrifice maintainability just to shave off a few seconds/milliseconds (who wants to code in compressed, obfuscated, and joined form?).

Most likely, you’ll figure out a way so that in your development environment, everything is ‘viewable’, but in production, things are tweaked to high performance.
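
A crude sketch of that split, with the flag and file names invented for the example: the page decides at load time whether to pull readable sources or the compressed production bundle.

var DEBUG = (location.hostname === 'localhost');
function load(src) {
    document.write('<script src="' + src + '"><\/script>');
}
if (DEBUG) {
    load('js/app.js');      // readable, one file per module
    load('js/util.js');
} else {
    load('js/app.min.js');  // concatenated and compressed build
}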

Comment by franzsee — December 3, 2009

I don’t think view source is as big a deal as it used to be. Open source has come a long way since the old days. No longer is it really necessary to crack a website open to see how it works just to learn HTML. There are so many resources out there to share knowledge, help newbies, and build great apps quickly and easily that I fail to see many cases where it is really that important. Sure, there are times when I want to see why somebody else’s site doesn’t work in my browser, or to see how somebody else pulled off something cool. However, I can also see how applications that do not wish to share their source have every right to obfuscate their code in any way they please.

Let’s stop focusing on view source. That may have been an important foundation for how the open web came to be, but it is not at all the most important factor now. I say open source and open standards are far, far more important than view source. I would also say that, as a platform, having source-based distribution over binary distribution is also key – even if the source is obfuscated in some way. Staying at the source-code level is important for allowing the widest range of implementations, and the furthest reach.

Comment by genericallyloud — December 4, 2009

Am I the only one who finds it disturbing that Google might favour big sites in their results just because they are more performant? Those are already the ones who have the money and time to spend on SEO in the first place, who have a need for performance because Google probably brings them loads of visitors already, and who also have the time and money to really squeeze performance out of their pages… what about the little guys? I think it is high time someone built a search engine that considers content value over accessory indicators like performance or the number of inbound links! Semantic web, anyone?

Comment by GBUK — December 4, 2009

I use GWT, and by default the compiler obfuscates and minimises the output for performance reasons. As jx12345 pointed out, there isn’t a ‘pretty’ version to link to. In my case, you’d get the precompiled Java source code anyway, which probably isn’t all that useful.

Sure, I could GWT-compile a ‘pretty’ version, but why go through that extra work and maintenance? I deliver apps to add value to my company and to be as performant as possible for my end users, not to educate other web developers. There is certainly a time and place for that, but production apps aren’t it.

Comment by abickford — December 4, 2009

Speaking of Closure, for those who haven’t read this yet:
http://www.sitepoint.com/blogs/2009/11/12/google-closure-how-not-to-write-javascript/

Comment by guix69 — December 4, 2009

@guix69: that article is full of pointless vitriol and many of its optimizations have been refuted elsewhere (check the comments). At the least, it has nothing useful to say on the topic at hand.

Comment by jgw — December 6, 2009

‘View Source’ isn’t going anywhere.

Comment by Gavin — December 7, 2009

