
Tuesday, September 9th, 2008

Open Web Podcast: Allen Wirfs-Brock, and Pratap Lakshman from Microsoft on ECMAScript, IE 8, and more

Category: OpenWebPodcast

Over on the Open Web Podcast, the crew interviewed Allen Wirfs-Brock and Pratap Lakshman from Microsoft about ECMAScript, IE 8, and more. It was a real pleasure to chat with these guys as they talked through a variety of issues, even including their thoughts on business models on the Web!

Allen Wirfs-Brock is the standards guy from Microsoft who sits on ECMA. Pratap Lakshman is from the JScript team, and works on the ECMAScript 3.1 committee.

They were gracious enough to join us on the call to discuss the recent news around ECMAScript Harmony, how Microsoft feels about it, the work being done in IE 8, performance, and tangents into ideas behind the Open Web.

You can download the podcast directly (OGG format too), or subscribe to the series (including via iTunes).

When a beta of IE 8 comes out, we all download it quickly to find out what was added, what wasn’t, and also to make sure that our tricks weren’t taken away!

The guys talked about the Object.defineProperty support added in IE8b2. Allen clarified the implementation details: it has been added for hosted DOM objects, not for JavaScript “native” objects. The team has been working on “end to end performance” issues, and has done a lot of work on the DOM. They also mentioned that we should expect a new set of technology to run JavaScript in the future. It has to happen; they have to join the new performance world of TraceMonkey, V8, and SquirrelFish (Extreme).
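To make the distinction concrete, here is a minimal sketch (our illustration, not code from the podcast) of what that partial support looks like in IE8 Beta 2:

// Works: defining an accessor on a hosted DOM object.
var el = document.createElement('div');
Object.defineProperty(el, 'answer', {
  get: function () { return 42; }
});
alert(el.answer); // 42

// Throws: the same call on a plain JavaScript ("native") object.
try {
  Object.defineProperty({}, 'answer', {
    get: function () { return 42; }
  });
} catch (e) {
  alert('Not supported on native objects in IE8');
}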

The better environments for running JavaScript dovetail nicely with the ability to have JavaScript become more self-hosting, which was discussed in some depth. They also mentioned the goal in IE8 of not requiring JavaScript developers to do special work for IE, and a bunch of bugs have been fixed around this core issue. What about core DOM event support? John brought up that with the addition of DOM prototypes, this could be added by libraries, and that this hook could be used for a lot of good.
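As a hedged sketch of the kind of hook John had in mind (assuming IE8’s exposed DOM prototypes and its existing attachEvent; this is our illustration, not anything shipped):

// A library could bolt standard event registration onto every element.
// Simplified: a real shim would also normalize the event object and 'this'.
if (window.Element && !Element.prototype.addEventListener) {
  Element.prototype.addEventListener = function (type, listener) {
    this.attachEvent('on' + type, listener); // IE's proprietary API
  };
}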

We were also led into discussing the disconnect between the ECMA standard and the W3C standard, primarily the DOM and JavaScript. Pratap was a little disturbed that the ECMAScript spec only had a few words on DOM, and some banter occurred around the role of JavaScript as being the One True Open Web language, or whether there is a place for the polyglots.

Other recent podcasts

Posted by Dion Almaer at 10:25 am

3.9 rating from 10 votes

Wednesday, September 3rd, 2008

Adding Custom Tags To Internet Explorer, The Official Way

Category: Browsers, HTML, IE

There have been some clever tricks to create new custom tags in Internet Explorer, such as the createElement trick. However, I never realized that Internet Explorer itself has provided a facility, ever since Internet Explorer 5, to define new tags in the markup and have them styled!
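For reference, the createElement trick mentioned above is usually described as nothing more than this (a sketch of the commonly cited technique, not code from this post):

// Calling document.createElement() with an unknown tag name, before the
// parser reaches it, convinces IE to recognize and style that element.
document.createElement('custom_container');
document.createElement('custom_child');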

Some details come from the MSDN documentation on this feature, titled “Using Custom Tags In Internet Explorer”. The trick lies in making sure you namespace things. For example, the MSDN docs give the example of creating a new JUSTIFY tag:


<HTML XMLNS:MY>
<MY:JUSTIFY>
   This paragraph demonstrates sample 
   usage of the custom MY:JUSTIFY tag in a document. 
   By wrapping the paragraph in start and end 
   MY:JUSTIFY tags, the contained
   text is justified within the specified width. Try 
   resizing the window to verify that the
   content is justified within a 500-pixel width. 
   And don't forget to right-click anywhere on the
   window and "View Source".
</MY:JUSTIFY>

You can even style this with CSS!


<HTML XMLNS:MY>
<STYLE>
@media all {
   MY\:JUSTIFY { text-align: justify; width: 500px }
}
</STYLE>

The docs then go on to discuss applying an Internet Explorer ‘behavior’ to this custom element to give it expanded abilities:

Custom tags become much more interesting when a DHTML behavior is applied. DHTML behaviors are applied to elements on a page the same way styles are, using cascading style sheets (CSS) attributes. More specifically, the proposed CSS behavior attribute allows a Web author to specify the location of the behavior and apply that behavior to an element on a page.

Using DHTML in Internet Explorer 4.0, it is possible to create simple animated effects, such as flying text, by manipulating the position of elements on a page over time. Beginning with Internet Explorer 5, this functionality can be encapsulated in a behavior, applied to a custom <InetSDK:FLY> tag, and wrapped around blocks of text on a page. This can be applied to cause text to fly from various directions.

I’m going to do more testing on this functionality today to see how deep it goes, but if it holds up it makes it easier to create browser shims for Internet Explorer for things like SVG, MathML, and even HTML 5 (provided we namespace the HTML 5 elements, which is required to get this to work).

The reason I’m looking for an alternative to the createElement trick is that I’ve found it doesn’t work with nested custom tags, which limits its usefulness. For example, I’ve found that the following does not enter the DOM correctly:

<html>
<body>
  <custom_container>
     <custom_child></custom_child>
  </custom_container>
</body>
</html>

Posted by Brad Neuberg at 1:30 pm
18 Comments

2.6 rating from 27 votes

Drag and drop via sneaky Textarea hack

Category: Canvas, JavaScript

Ernest Delgado combined work from an earlier project with the realization that textareas are native drop targets to create Drag and Drop without Drag and Drop.

Something that I never realized before is that text areas are drop targets by default. Using this property alone (without registering drag events on the source elements), we can emulate drag and drop behavior of non-linked images between different documents.

Put together the layers, with canvas and a hidden transparent textarea, and you get the architecture he diagrams in the post.
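A rough sketch of the idea (ours, not Ernest’s actual code; the element ids are hypothetical):

// A transparent textarea layered over the canvas receives the drop
// natively; we then read what landed in it and paint it onto the canvas.
var canvas  = document.getElementById('fridge');
var catcher = document.getElementById('catcher'); // the transparent textarea
var ctx = canvas.getContext('2d');

catcher.onmouseup = function () {
  // Give the browser a tick to insert the dropped text (the image URL)
  setTimeout(function () {
    var url = catcher.value;
    catcher.value = '';
    if (!url) return;
    var img = new Image();
    img.onload = function () { ctx.drawImage(img, 10, 10); };
    img.src = url;
  }, 0);
};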

Then, check out the demo that allows you to drag between gadgets. Take images from the right-hand gadget, and paste them onto the fridge!

Posted by Dion Almaer at 7:02 am
8 Comments

3.3 rating from 27 votes

Tuesday, September 2nd, 2008

Google Chrome, Chromium, and V8

Category: Browsers, Google

It is really exciting to see the pace that browsers have been setting recently, especially with respect to performance.

I have been able to keep in sync with Google Chrome, the new browser, and Chromium, the open source code-base it comes from. There are a couple of innovations that have been great to see, such as the multiple-process model for tabs and windows, the unified tab and search functionality, and, at the core, V8.

V8 is the super-speedy JavaScript VM by Lars Bak of Sun HotSpot fame. When you run JavaScript benchmarks on this puppy, you see very speedy responses indeed. The V8 part of the comic explains it well.

The breakthrough is the use of hidden classes to look at structures and work out the shared information (e.g. object Foo looks like a Person). Once you have that data, you can optimize in the same way you would with class-based systems (see the sketch after this list). V8’s improvements consist of:

  • Compiler: Instead of using interpretation, JavaScript gets compiled down to native code
  • Inline caching: Optimize for accessing, and function calling
  • Very efficient memory management system
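Here is an illustrative example (ours, not from the comic) of why hidden classes pay off:

// Objects created with the same property order share one hidden class,
// so property access compiles down to a fixed offset instead of a
// dictionary lookup.
function Person(name, age) {
  this.name = name; // same order every time -> same hidden class
  this.age = age;
}

var a = new Person('Ada', 36);
var b = new Person('Lin', 52); // shares a's hidden class

// Adding properties in a different order forks the hidden class and
// defeats the optimization:
var c = new Person('Mao', 41);
c.nickname = 'M'; // c transitions to a new hidden class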

What version of JavaScript? “V8 implements ECMAScript as specified in ECMA-262, 3rd edition, and runs on Windows XP and Vista, Mac OS X 10.5 (Leopard), and Linux systems that use IA-32 or ARM processors.”

If you are interested in the benchmark suites, you can run them and check them out.

Some of the core technology is also exciting to geeks. For example, to keep the code operating-system neutral, Chromium uses the Skia Graphics Library (SGL), also used by the Android team.

What about the process manager? John Resig has interesting thoughts on that with the rub: “The blame of bad performance or memory consumption no longer lies with the browser but with the site.”

Alex Russell also has some good thoughts on the importance of Chrome, and Christopher Blizzard (Mozilla) also has some thoughts on how this shows the browser market is strong.

This is all great to see. Not only is this just the beginning for Google Chrome, Chromium, and V8 (I am dying for a Mac version!), but the other browsers are keeping pace too. The end result is a better Web for users, and a higher quality of product for developers to build against!

Posted by Dion Almaer at 2:00 pm
49 Comments

4.5 rating from 60 votes

Wednesday, August 27th, 2008

Internet Explorer 8 Beta 2 and Web Standards

Category: Announcements, Browsers, IE

Internet Explorer 8 Beta 2 was released today. There are several cool UI enhancements that this beta brings to the table that I won’t cover in this post, but you can learn more about them on the IEBlog. Instead, I want to talk about how beta 2 affects IE’s relationship to web standards.

First, CSS Expressions are no longer supported in Standards Mode:

Also known as ‘Dynamic Properties’, CSS expressions are a proprietary extension to CSS with a high performance cost. As of Internet Explorer 8 Beta 2, CSS expressions are not supported in IE8 standards mode. They are still supported in IE7 Strict mode and Quirks mode for backward compatibility.

In case you don’t know, CSS expressions were actual bits of JavaScript that you could run from CSS rules; this was commonly used to simulate the CSS max-width property for IE:

div.someClass {
  /* Internet Explorer */
  width: expression(document.body.clientWidth > 600 ? "600px" : "auto");
  /* Standards-compliant browsers */
  max-width: 600px;
}

IE 8 beta 2 also now supports alternate style sheets:

Internet Explorer 8 now supports alternative style sheets as specified by HTML4 and CSS2.1. The alternative styles that are defined by the Web page author are available through the Style menu under the Page menu. The styles are also available through the Style menu under the View menu. The No Style option on either menu can be used to disable all author styling.

In terms of the Known Issues with IE 8 Beta 2, the first is related to Ajax bookmarking and back button support and using window.location.hash to do cross-domain communication:

Internet Explorer 8 creates entries in the travel log and back button for each instance of window.location.hash that is set. This is part of the behavior for Internet Explorer 8 AJAX Navigation. If you use this technique to communicate between documents, we recommend that you switch to the Internet Explorer 8 Cross Document Messaging feature that is based on Section 6.4 of the HTML 5.0 specification.
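For those who haven’t played with it, cross-document messaging looks roughly like this (a minimal sketch; the iframe id and origins are hypothetical):

// Sender, in the containing page:
var frame = document.getElementById('widget');
frame.contentWindow.postMessage('hello', 'http://widget.example.com');

// Receiver, inside the iframe (attachEvent covers IE8):
function onMessage(e) {
  if (e.origin !== 'http://parent.example.com') return; // always check origin
  alert('Got: ' + e.data);
}
if (window.addEventListener) {
  window.addEventListener('message', onMessage, false);
} else {
  window.attachEvent('onmessage', onMessage);
}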

Finally, there are some issues with the onload event for IE’s XDomainRequest object that helps with cross-domain communication:

The onload event may not fire reliably. We recommend you use the onprogress event, which continues to fire as the data is received.

Unfortunately, that is it for this release. You can see the full Beta 2 release notes, or download it yourself.

On a related note, IE 8 Beta 2 includes cross-site scripting attack (XSS) protection:

The XSS Filter operates as an IE8 component with visibility into all requests / responses flowing through the browser. When the filter discovers likely XSS in a cross-site request, it identifies and neuters the attack if it is replayed in the server’s response. Users are not presented with questions they are unable to answer – IE simply blocks the malicious script from executing.

Finally, PPK has also published a post on IE 8 Beta 2 and its changes.

Posted by Brad Neuberg at 5:59 pm
21 Comments

2.3 rating from 33 votes

Tuesday, August 26th, 2008

navigator.geolocation: Using the W3C Geolocation API today

Category: Gears, Google, JavaScript, Mapping, Standards

Last week I wrote a simple WhereAreYou? application that used the Google Ajax APIs ClientLocation API to access your location via your IP address.

At the same time, we announced support for the Gears Geolocation API that can calculate your address using a GPS device, WiFi info, cell tower ids, and IP address lookups.

Add to all of that, the W3C Geolocation API that Andrei Popescu of the Gears team is editing. You will notice that it looks similar to the Gears API, with subtle differences. The ClientLocation API is quite different.

To make life easier, I decided to put together a shim called GeoMeta that gives you the W3C Geolocation API, and happens to use the other APIs under the hood.

If you have the Geolocation API native in your browser (no one does yet, future proof!), that will be used. If you have Gears, that API will be used. And finally, with neither, the ClientLocation API will be used behind the scenes.

To you the API will look similar:

// navigator.geolocation.getCurrentPosition(successCallback, errorCallback, options)
navigator.geolocation.getCurrentPosition(function(position) {
      var location = [position.address.city, position.address.region, position.address.country].join(', ');
      createMap(position.latitude, position.longitude, location);
}, function() {
      document.getElementById('cantfindyou').innerHTML = "Crap, I don't know. Good hiding!";
});        

At least, that is what I would like. Unfortunately, there are a few little differences that leak through.

  • The W3C API only seems to give you a lat / long, so you have to do the geocoding to get address info
  • The Gears API gives you an additional gearsAddress object attached to the resulting position object. This can contain a lot of information on the resulting area (street address, city, and so on); however, for certain providers the API returns it as null, the same as the W3C standard
  • That gearsAddress object has slightly different information from the address data that the ClientLocation API returns. NOTE: I would love to see this just called ‘address’ to help the shim.

To give you control when you need it, you can ask the navigator.geolocation object what type it is: navigator.geolocation.type will be null if it is native, but ‘Gears’ or ‘ClientLocation’ if a shim kicks in. You can also check navigator.geolocation.shim to see if it is augmented code.
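In code, the detection described above looks like this (a short sketch using the post’s property names):

// A null type means the native API; otherwise a shim is in play.
if (navigator.geolocation.shim) {
  alert('Shimmed via ' + navigator.geolocation.type); // 'Gears' or 'ClientLocation'
} else {
  alert('Native W3C Geolocation API');
}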

Implementation

There is some fun implementation code in there if you poke around. For example, for the ClientLocation API, if you make a call before the Google Loader has fully loaded, it will be added to a queue and kicked off when loading finishes. Dealing with dynamically creating <script src> as a loading mechanism sure is fun!
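A rough sketch of that queue-until-loaded pattern (the names here are illustrative, not GeoMeta’s actual internals):

// Calls made before the loader arrives are queued, then replayed on load.
var pendingCalls = [];
var loaderReady = false;

function getCurrentPosition(success, error, options) {
  if (!loaderReady) {
    pendingCalls.push([success, error, options]); // defer until ready
    return;
  }
  doGetCurrentPosition(success, error, options); // hypothetical real impl
}

function onLoaderReady() { // invoked once the loader script has run
  loaderReady = true;
  for (var i = 0; i < pendingCalls.length; i++) {
    doGetCurrentPosition.apply(null, pendingCalls[i]);
  }
}

// The dynamic <script src> loading mechanism itself:
var script = document.createElement('script');
script.src = 'http://www.google.com/jsapi'; // the Google Loader
document.getElementsByTagName('head')[0].appendChild(script);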

I like the idea of jumping straight to the W3C standard and updating the shim as the APIs change. That way, when browsers catch up, the code will still work using the native APIs and you don’t have to change a thing.

I also hope that we get general reverse geocoding in place, which would enable me to even take the native “standard” and strap on useful address info to the bare bones lat/long.

Where are you?

Posted by Dion Almaer at 9:14 am
7 Comments

3.6 rating from 30 votes

Friday, August 22nd, 2008

JavaScript JIT: The Dream Gets Closer (in Firefox)

Category: Firefox, JavaScript

For years, many of us have been salivating over the idea of JIT’ed JavaScript in the browser. Adobe’s JIT’ing Flash VM showed a preview of tremendous speed gains to be had, but we’ve had to wait until SquirrelFish from WebKit to see anything dramatic happen in the browser.

Until now.

Mozilla just let the cat out of the bag on their new TraceMonkey project. Brendan Eich, Mozilla’s CTO, describes it thusly:

I’m extremely pleased to announce the launch of TraceMonkey, an evolution of Firefox’s SpiderMonkey JavaScript engine for Firefox 3.1 that uses a new kind of Just-In-Time (JIT) compiler to boost JS performance by an order of magnitude or more. [Emphasis ours.]

There are charts and graphs all over the Web; here’s one from Brendan’s blog.

As Brendan points out, benchmarks can be quite misleading; the best results are the demos. Here’s a link to Mike “schrep” Schroepfer’s blog entry, where he put one together in a screencast.

Brendan goes into significant detail on how all of this came about, and notes some key points:

  • We have, right now, x86, x86-64, and ARM support in TraceMonkey. This means we are ready for mobile and desktop target platforms out of the box.
  • As the performance keeps going up, people will write and transport code that was “too slow” to run in the browser as JS. This means the web can accommodate workloads that right now require a proprietary plugin.
  • As we trace more of the DOM and our other native code, we increase the memory-safe codebase that must be trusted not to have an exploitable bug.
  • Tracing follows only the hot paths, and builds a trace-tree cache. Cold code never gets traced or JITted, avoiding the memory bloat that whole-method JITs incur. Tracing is mobile-friendly.
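To make the hot-path point concrete, here is an illustrative example (ours, not Brendan’s) of the kind of loop a tracing JIT records:

// After a few iterations the tracer records this loop and compiles it to
// native code, as long as the types stay stable; cold code elsewhere is
// never compiled, keeping memory use down.
function sumSquares(n) {
  var total = 0;
  for (var i = 0; i < n; i++) {
    total += i * i; // stays on trace: always number * number
  }
  return total;
}
sumSquares(1000000);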

For even more details, check out Andreas Gal’s detailed blog entry on trace trees.

The first phase of Ajax has been all about leveraging the existing platforms as much as we can. This announcement is a major signpost towards the second phase: improving the existing platforms. We couldn’t be more excited.

Brendan puts it in his own understated way:

JS-driven <canvas> rendering, with toolkits, scene graphs, game logic, etc. all in JS, are one wave of the future that is about to crest.

John Resig voices similar sentiments in his excellent blog post on the subject:

[This] means that JavaScript is no longer confined by the previously-challenging resource of processing power… I fully expect to see more massive projects being written in JavaScript…

The primary thing holding back most extensive Canvas development hasn’t been rendering – but the processor limitations of the language (performing the challenging mathematical operations related to vectors, matrices, or collision detection). I expect this area to absolutely explode after the release of Firefox 3.1 as we start to see this work take hold.

Speaking of Canvas and JS, we’ve got our own little project we’ve been hacking this year that we can’t wait to try this on… way to go, Mozilla!

Posted by Ben Galbraith at 6:19 pm
16 Comments

4.7 rating from 64 votes

Monday, August 18th, 2008

A simple solution to the “other” problem with select boxes

Category: Examples, JavaScript, jQuery, Tip

Jeffrey Olchovy has posted a simple tutorial on using jQuery to solve a “select-to-input toggle” that shows and hides a text field when you select “Other”. It overloads the same form name, so the server side gets just one value, and doesn’t know or care if it was in the drop down or typed in. You can try it live here.

This is a simple little solution to the issue that there isn’t a native control that really does the job. What you probably want here is the ability to drop down and select items, or just type into the select box field itself. This is one reason why people implement auto-complete text fields instead of using select boxes for this case, but wouldn’t it be nice to be able to tag your <select> and be done?
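The toggle itself is only a few lines; here is a hedged sketch of the idea (not Jeffrey’s actual code; the ids are hypothetical):

// When "Other" is chosen, hide the select, show a text input, and move
// the form name over so the server sees a single "color" value either way.
jQuery(function ($) {
  var select = $('#color-select'),
      input  = $('#color-input').hide();

  select.change(function () {
    if (select.val() === 'other') {
      select.hide().removeAttr('name');
      input.attr('name', 'color').show().focus();
    }
  });
});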

Posted by Dion Almaer at 5:51 am
10 Comments

3.3 rating from 47 votes

Wednesday, August 13th, 2008

More on codecs: Apple’s view, and the BBC makes a move

Category: Sound, Standards

We just talked about codecs, and in particular the world of Ogg.

Mozilla came out supporting the format, and saying that we should see it in Firefox 3.1. Niall Kennedy then reminded me of a post, way back in time, by David Singer of Apple discussing the research that Apple did into Ogg:

Preamble

The HTML5 specification contains new elements to allow the embedding of audio and video, similar to the way that images have historically been embedded in HTML. In contrast to today’s behavior, using object, where the behavior can vary based on both the type of the object and the browser, this allows for consistent attributes, DOM behavior, accessibility management, and so on. It also can handle the time-based nature of audio and video in a consistent way.

However, interoperability at the markup level does not ensure interoperability for the user, unless there are commonly supported formats for the video and audio encodings, and the file format wrapper. For images there is no mandated format, but the widely deployed solutions (PNG, JPEG/JFIF, GIF) mean that interoperability is, in fact, achieved.

Licensing

The problem is complicated by the IPR situation around audio and video coding, combined with the W3C patent policy. “W3C seeks to issue Recommendations that can be implemented on a Royalty-Free (RF) basis.” Note that much of the rest of the policy may not apply (as it concerns the specifications developed at the W3C, not those that are normatively referenced). However, it’s clear that at least RF-decode is needed.

The major concerns were:

  • a number of large companies are concerned about the possible
    unintended entanglements of the open-source codecs; a ‘deep pockets’
    company deploying them may be subject to risk here;
  • the current MPEG codecs are currently licensed on a royalty-bearing basis;
  • this is also true of the older MPEG codecs; though their age suggests examining the lifetime of the patents;
  • and also SMPTE VC-1
  • H.263 and H.261 both have patent declarations at the ITU.
    However, it is probably worth examining the non-assert status of
    these, which parts of the specifications they apply to (e.g. H.263
    baseline or its enhancement annexes), and the age of the patents and
    their potential expiry.
  • This probably doesn’t have significant IPR risk, as its wide
    deployment in systems should have exposed any risk by now; but it
    hardly represents competitive compression.
  • Most proprietary codecs are licensed for payment, as that is the
    business of the companies who develop them.
So, there was worry. The BBC decided to try to solve this by creating Dirac, but they also just posted on Open Industry Standards For Audio & Video On The Web, where they put their money behind H.264 and AAC:

I believe that the time has come for the BBC to start adopting open standards such as H.264 and AAC for our audio and video services on the web. These technologies have matured enough to make them viable alternatives to other solutions.

They then answer the obvious question on Dirac:

Some people may ask: why are you not using your own Dirac codec? I am fully committed to the development and success of Dirac, but for now those efforts are focused on high-end broadcast applications. This autumn, we intend to show the world what can be achieved with these technologies.

Something tells me that 2008 is going to be a fun one wrt the opening of codecs.

Posted by Dion Almaer at 7:16 am
5 Comments

3.5 rating from 11 votes

Sunday, August 10th, 2008

On Fighting the Web; The invitation

Category: Editorial

But fighting the web is like holding back the ocean; it will route around you or it will wear you down, but will never go away, and it will never tire or give up. Yet in spite of the futility of fighting the web, Silverlight is being positioned in opposition to the web, not in support of it.

This is one of a few great quotes from DeWitt Clinton’s post On Fighting the Web. DeWitt is a colleague at Google, one I have shared offices and great conversations with. He has very strong ethics, but at the same time is very practical. But, back to his writing.

This is not a post saying “the Open Web rules and the proprietary Web is evil”. If you actually read this carefully you see a very interesting argument that covers:

We can’t be blind:

The short answer is that the technology behind Silverlight, and most certainly the company creating it, has the potential of changing how the web itself works.

The Web has strengths, but man it is tough to work with:

If you’re a web developer then you’ve felt the acute pain involved in writing applications inside the browser. Even armed with the most state-of-the-art toolkits, such as jQuery, Dojo, etc., you’re still limited to the available runtime of HTML, CSS, and JS, and worse, the absolute morass of cross-browser incompatibilities and restricted access to native client-side capabilities. I remain in awe of what people have accomplished in this environment, but I’m sad that this is all we’ve been able to accomplish so far.

Man, if the client is involved… evolution is slooooow:

The web revs slowly. Very, very slowly. In 10 years we’ve seen virtually no meaningful advances in the ubiquitous web client; just a painful slog forward as web developers learn to eke out just a little more functionality in a constrained environment. Progress is slow because revving the ubiquitous client requires the coordination of multiple parties, not all of whom have shown consistent interest in working together to move the web forward.

There is some hope for an Open Web-style speedup:

More recently we’ve seen some earnest attempts at breaking that cycle. Rather than wait for the entire web to catch up, projects like Gears seek to rev the client from the inside out. It may take several years for standards like HTML5 to be widely deployed, but if developers can gain a toehold inside the client and start forcing the issue immediately then we’ll quickly see what works and what doesn’t, and be that much more informed about what to standardize and adopt as part of the long-term web platform.

The proprietary folks have a huge advantage, as they can just innovate and run without getting consensus:

But there’s another approach, an approach best exemplified today by the Flash runtime, whereby one doesn’t seek to improve the web from the inside, but rather replace it entirely. Sure, technologies like Flash take advantage of the web via http-based delivery mechanisms and in that they run inside the browser, and yes, they can use network connections like anything else, but these alternate runtimes fundamentally divorce themselves from the web ecosystem, and in doing so gain a significant advantage, but at a cost.

In spite of circumventing the web — no, because they circumvent the web — these new runtimes have the potential of offering a far better developer experience, and hence, a far better user experience, than the least-common-denominator of the standard widely-deployed ubiquitous browser runtimes of today.

And, thus, the proprietary stuff can be very good indeed:

Which leads us to Silverlight: Silverlight is positioned to take the fork-and-forget approach to the web pioneered by Flash and bring to it an unprecedented wealth of technology and corporate might. With a better underlying runtime and VM, better tool support, far superior multi-language capabilities, and more marketing muscle, Silverlight has all the potential to make rapid and noticeable inroads over the next several months, cleaving a large section clean out of the web.

And the scary thing? That this isn’t entirely a bad idea. The browser itself is anemic, the dependency on a single language is a handicap, the security models simultaneously constricting and flawed, the development environments underpowered, and frankly, the whole ecosystem is deserving of a major disruption. We’ve lived too long thinking that what we have today is good enough.

And will get better, fast:

Granted, these technologies won’t be perfect at first. On the contrary, they might be slow, cumbersome to deploy, buggy, and feature deprived. But right now that doesn’t matter. The strategy is all about getting a wedge in place, a bit of leverage that can be used to further pry open a vector for escaping the existing ecosystem. And over time, as the technology improves and adoption grows, so will the size of that tear in the fabric of the web.

But, there is a reason why the Web does so well:

But fighting the web is like holding back the ocean; it will route around you or it will wear you down, but will never go away, and it will never tire or give up. Yet in spite of the futility of fighting the web, Silverlight is being positioned in opposition to the web, not in support of it.

Why in opposition to the web? This stems from the principle that the web is axiomatically defined as an open system, where the underlying technologies are resistant to the centralization of control, where the protocols and formats are extensible and malleable, and where the power to effect change is shared and distributed. The DNA of the web is one of ceding control, and of learning to work with, rather than against, the collective wisdom and power of a larger community.

Whereas a development monoculture, a centralization of control, and a tight grasp on the ability to change and adapt, all stand against these basic ideals, and give rise to the forces that, given enough time, will erode and eat away at any temporary advantage gained.

A violation of these principles does not necessarily make for a bad technology, but it does make it something other than the web.

And finally, and this is so key, the answer isn’t to try to destroy the innovation in Flash, Silverlight, and others. Instead, the biggest win will be for us to bring technology from those worlds into the Web itself. If we can do that, I think it will be a win-win, and we will have a much better Web to show for it:

But the call to action here is not to go and try to fight the disruptive technology. On the contrary, the ideas are sound and the improvements are very much needed. No, the call is to discover ways in which these ideas can become a part of the web, rather than working to tear it apart.

I do not want to see ambitious attempts like these fail. Just the opposite — I want to see them succeed. But success on the web requires a different kind of DNA, the type of DNA that is difficult to adopt when one’s attention is focused on fighting the web itself.

Posted by Dion Almaer at 12:57 am
11 Comments

4.2 rating from 25 votes

Thursday, August 7th, 2008

This Week in HTML 5: Mark Pilgrim’s new blog series

Category: HTML

I am really jazzed about the first entry in a new series on HTML 5. Mark Pilgrim (of Python and Greasemonkey fame, Open Web advocate, writer extraordinaire, and creator of Google Doctype) has started the series This Week in HTML 5, which aims to keep us up to speed on the spec and on progress across the board (what browsers are implementing, etc.).

In the first episode he discusses the progress with workers, an interesting clarification around providing text instead of images, and more.

Anyway, over to Mark:

The biggest news is the birth of the Web Workers draft specification. Quoting the spec, “This specification defines an API that allows Web application authors to spawn background workers running scripts in parallel to their main page. This allows for thread-like operation with message-passing as the coordination mechanism.” This is the standardization of the API that Google Gears pioneered last year. See also: initial Workers thread, announcement of new spec, response to Workers feedback.
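The API in the draft looks roughly like this (a minimal sketch based on the draft’s shape; the worker file name is hypothetical):

// In the main page: spawn a background worker and talk to it by message.
var worker = new Worker('crunch.js');
worker.onmessage = function (e) {
  alert('Result from worker: ' + e.data);
};
worker.postMessage('1000000');

// In crunch.js, running in parallel to the page:
onmessage = function (e) {
  var n = parseInt(e.data, 10), total = 0;
  for (var i = 0; i < n; i++) total += i;
  postMessage(String(total));
};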

Also notable this week: even more additions to the Requirements for providing text to act as an alternative for images. 4 new cases were added:

  1. A link containing nothing but an image
  2. A group of images that form a single larger image
  3. An image not intended for the user (such as a “web bug” tracking image)
  4. Text that has been rendered to a graphic for typographical effect

Additionally, the spec now tries to define what authors should do if they know they have an image but don’t know what it is. Quoting again from the spec:

If the src attribute is set and the alt attribute is set to a string whose first character is a U+007B LEFT CURLY BRACKET character ({) and whose last character is a U+007D RIGHT CURLY BRACKET character (}), the image is a key part of the content, and there is no textual equivalent of the image available. The string consisting of all the characters between the first and the last character of the value of the alt attribute gives the kind of image (e.g. photo, diagram, user-uploaded image). If that value is the empty string (i.e. the attribute is just “{}“), then even the kind of image being shown is not known.

  • If the image is available, the element represents the image specified by the src attribute.
  • If the image is not available or if the user agent is not configured to display the image, then the user agent should display some sort of indicator that the image is not being rendered, and, if possible, provide to the user the information regarding the kind of image it is (as derived from the alt attribute).

Great to see this series kick into gear, and to have Mark keep us in the loop on the very important HTML 5 effort.

Posted by Dion Almaer at 11:39 am
Comment here

3.8 rating from 21 votes

CSS variables considered harmful?

Category: CSS, Standards, W3C

Bert Bos, a W3C fellow, thinks that CSS variables are to be considered harmful:

Adding any form of macros or additional scopes and indirections, including symbolic constants, is not just redundant, but changes CSS in ways that make it unsuitable for its intended audience. Given that there is currently no alternative to CSS, these things must not be added.

He has some very compelling points in there, some of which I agree with, and others that I don’t:

This PHP version proves that it is not necessary to add constants to CSS. Just like the existence of the WebKit implementation cannot be taken as proof that constants in CSS are useful, so the PHP implementation cannot prove that either. But the PHP implementation has the benefit of letting authors determine the usefulness for themselves, without modifying CSS on the Web.

You can obviously use pre-processors to handle many macro situations. That doesn’t mean a pre-processor is the right place for functionality like this. I don’t want to force every CSS request through PHP. I also like having functionality in CSS itself, as that can be shared across projects and developers without “oh, and by the way this looks a little different as we pre-process it with a magic Rails action”. Something as important as variables should be low level, IMO.

It is quite likely that somebody who is trying to learn CSS will give up learning it when he sees that style sheets that occur on the Web don’t actually look like the tutorials he started from. Difference in upper- and lowercase or in pretty-printing are hindrances to learning, too, but limited ones: you soon learn to ignore those differences. But symbolic constants are different in each style sheet and have to be interpreted and understood each time anew.

A touch too strong. Things change. Tutorials get out of date. C’est la vie. And, you can always use the old form… and the feature is so simple that it won’t take you weeks to work it out!

People who understand CSS in principle may still have trouble understanding the indirection provided by the em unit, many have trouble with the advanced selectors in level 3, and many more won’t understand, e.g., any of the properties that have to do with bi-directionality or vertical text. For each feature that is added to CSS there must be a careful balance (based on an informed guess, because these things are difficult to test) between the number of users that will be excluded by that feature and the number for whom it is essential that it be added.

I agree on the need for balance. There are some crazy complicated parts of CSS. However, simple variables are trivial in comparison! Also, I think they will be used a hell of a lot more frequently.

Treating symbolic constants as an independent module would make them available for use in other contexts than CSS, would make them available to precisely the people who need them without hindering other people, would allow them to be developed without impact on CSS, and allow them to be developed more easily into what they are sure to develop into anyway: full macros, able to replace anything, not just CSS values, and able to make files shorter instead of longer.

I agree about modularity. There will be issues with scope and such, and we will have to fight to keep the system as simple as possible.

However, I can’t wait to be able to re-skin something by changing only the variables that represent semantic parts of my application. Search and replace isn’t good enough. Tools that see the #xxxxxx and show you a color aren’t good enough. You end up reading a lot more CSS than you write, so we should help that use case. I don’t think CSS should become a programming language with if statements and the like, and I doubt this is a gateway drug to that.
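For context, the variables proposal under discussion looked roughly like this (a sketch of the draft syntax as we understand it; the names are ours):

/* Define once... */
@variables {
  brandColor: #fe8d12;
  pageWidth: 60em;
}

/* ...then reference everywhere; re-skinning means editing one block. */
div.header {
  background-color: var(brandColor);
  width: var(pageWidth);
}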

Posted by Dion Almaer at 8:06 am
24 Comments

4.2 rating from 23 votes

Wednesday, August 6th, 2008

iPhone Safari Flick Navigation By Example

Category: iPhone, Mobile

Matthew Congrove took some time to play with the iPhone SDK, but it wasn’t his bag, so he decided to go back to building a Web application for the iPhone, and was pleasantly surprised with the updates to Safari that enabled new things:

In the midst of all my research for help I stumbled across something that I, like most, had completely forgotten about; the iPhone update wasn’t just for native third-party applications, but it also upgraded the existing applications. Yes, that includes Safari. The upgrade for the iPhone’s on-board browser added in support for CSS animations and transitions, a JavaScript accessible database, a few new DOM selectors and more. For me this meant that the myDailyPhoto web application could look and feel more like it was a native Cocoa Touch enabled experience. As soon as the idea crossed my mind I sat down to churn out this little test app.

To get the flick effect Matthew wrote the following CSS:

.divSlide {
    -webkit-animation-name: "slide-me-to-the-right";
    -webkit-animation-duration: 1s;
}
@-webkit-keyframes "slide-me-to-the-right" {
    from { left: 0px; }
    to { left: 100px; }
}
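To actually trigger the slide on a flick, you would attach the class from a touch handler; here is a hedged sketch (ours, not Matthew’s code; the id is hypothetical):

// Apply the animation class when the user flicks the element rightward.
var panel = document.getElementById('panel');
var startX = null;

panel.ontouchstart = function (e) {
  startX = e.touches[0].pageX;
};
panel.ontouchend = function (e) {
  var endX = e.changedTouches[0].pageX;
  if (startX !== null && endX - startX > 50) { // a rightward flick
    panel.className = 'divSlide';
  }
  startX = null;
};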

Posted by Dion Almaer at 11:20 am
7 Comments

4.3 rating from 58 votes

Tuesday, August 5th, 2008

JavaScript Overlay Types in GWT

Category: GWT

Bruce Johnson of the GWT team has continued the deep dive into GWT with a posting on a new GWT 1.5 feature: JavaScript overlay types. This feature goes beyond the JSNI technique to “make it easy to integrate entire families of JavaScript objects into your GWT project. There are many benefits of this technique, including the ability to use your Java IDE’s code completion and refactoring capabilities even as you’re working with untyped JavaScript objects.”

The first example that Bruce gives is to mix JSON objects with Java:


var jsonData = [
  { "FirstName" : "Jimmy", "LastName" : "Webber" },
  { "FirstName" : "Alan",  "LastName" : "Dayal" },
  { "FirstName" : "Keanu", "LastName" : "Spoon" },
  { "FirstName" : "Emily", "LastName" : "Rudnick" }
];

// An overlay type
class Customer extends JavaScriptObject {

  // Overlay types always have protected, zero-arg ctors
  protected Customer() { }

  // Typically, methods on overlay types are JSNI
  public final native String getFirstName() /*-{ return this.FirstName; }-*/;
  public final native String getLastName()  /*-{ return this.LastName;  }-*/;

  // Note, though, that methods aren't required to be JSNI
  public final String getFullName() {
    return getFirstName() + " " + getLastName();
  }
}

// the glue
class MyModuleEntryPoint implements EntryPoint {
  public void onModuleLoad() {
    Customer c = getFirstCustomer();
    // Yay! Now I have a JS object that appears to be a Customer
    Window.alert("Hello, " + c.getFirstName());
  }

  // Use JSNI to grab the JSON object we care about
  // The JSON object gets its Java type implicitly based on the method's return type
  private native Customer getFirstCustomer() /*-{
    // Get a reference to the first customer in the JSON array from earlier
    return $wnd.jsonData[0];
  }-*/;
}

Bruce then shows us some performance wins that you get, as GWT gets to do a lot of inlining:

A quick digression for compiler geeks. Another neat thing about overlay types is that you can augment the Java type without disturbing the underlying JavaScript object. In the example above, notice that we added the getFullName() method. It’s purely Java code — it doesn’t exist on the underlying JavaScript object — and yet the method is written in terms of the underlying JavaScript object. In other words, the Java view of the JavaScript object can be richer in functionality than the JavaScript view of the same object but without having to modify the underlying JS object, neither the instance nor its prototype.

(This is still part of the digression.) This cool wackiness of adding new methods to overlay types is possible because the rules for overlay types by design disallow polymorphic calls; all methods must be final and/or private. Consequently, every method on an overlay type is statically resolvable by the compiler, so there is never a need for dynamic dispatch at runtime. That’s why we don’t have to muck about with an object’s function pointers; the compiler can generate a direct call to the method as if it were a global function, external to the object itself. It’s easy to see that a direct function call is faster than an indirect one. Better still, since calls to methods on overlay types can be statically resolved, they are all candidates for automatic inlining, which is a Very Good Thing when you’re fighting for performance in a scripting language.

From this Java code:

class MyModuleEntryPoint implements EntryPoint {
  public void onModuleLoad() {
    JsArray<Customer> cs = getCustomers();
    for (int i = 0, n = cs.length(); i < n; ++i) {
      Window.alert("Hello, " + cs.get(i).getFullName());
    }
  }

  // Return the whole JSON array, as is
  private final native JsArray<Customer> getCustomers() /*-{
    return $wnd.jsonData;
  }-*/;
}

The compiler inlines away to get to the following (shown unobfuscated):

function $onModuleLoad() {
  var cs, i, n;
  cs = $wnd.jsonData;
  for (i = 0, n = cs.length; i < n; ++i) {
    $wnd.alert('Hello, ' + (cs[i].FirstName + ' ' + cs[i].LastName));
  }
}

Posted by Dion Almaer at 8:10 am
Comment here

3.5 rating from 24 votes

Friday, August 1st, 2008

Another Jaxer 1.0 Release Candidate with new APIs

Category: Aptana

Greg Murray has blogged about a new release candidate for Aptana Jaxer that contains a lot of new features.

Kevin Hakman told us about the release:

We’ve had server-side JS database APIs all along, but now handling result sets is even easier. There’s also now full fine-grain control and access to the entire communication cycle, with APIs for message headers, redirects, content and types. Speaking of types… for the first time with Jaxer, you can return content types other than HTML, including JSON, XML, GIF, etc… Yes, even GIFs. Jaxer has a fresh new Image API that among other things can convert Canvas to static images and serve them up. Like Greg, I too really like the idea of using Jaxer for easily creating JSON data services, which is a rapidly growing trend as developers discover the powerful capabilities of JSON more and more. In Jaxer, it’s very cool since it’s all native JavaScript on the client, on the wire, and on the server. There’s even enhanced JSON serialization to make it even easier than before on both client and server. JSON services also open Jaxer to be useful in combination with rich internet clients other than Ajax UIs, such as Flash, Flex or even Silverlight, since all those support JavaScript on the client and can consume JSON data. For Ajax and RIA developers this is a boon since you can now write your client-side and server-side code in the same language. And if you prefer XML data services, Jaxer’s native E4X (ECMAScript for XML) support means you can handle XML docs natively in JS on Jaxer as well.

This release also includes a totally new concept: a secure sandbox which, as Greg explains, “lets you load, on the server, pages from other domains and allow their JavaScript to execute without giving them access to the Jaxer API or your own server-side code, but still gives your code access to their window objects and anything inside them”. For anyone who has ever done screen scraping for mashups or other applications, this really helps, since Ajax pages have historically thwarted scraping operations. With this feature in Jaxer you can securely get a remote page, execute its functions, and scrape the resulting DOM nodes (yes, you need not do tedious manipulations with strings), and voila!

Here are the features:

  • Application context settings that allow for easier app configuration, app properties, database settings, etc…
  • Database API enhancements with richer APIs for working with result sets.
  • Server-side image manipulation including server-side canvas support and ability to convert to other image types.
  • Native command execution API so that you can run system commands and handle the output from those.
  • Asynchronous server-side JavaScript processing lets you implement callbacks in your server-side code too.
  • Ability to return custom content types (e.g. json, xml, gif, html, etc…)
  • Full control of the request/response lifecycle including setting redirects, headers, content, etc…
  • Secure sandbox supporting cross domain calls, sandboxed JavaScript execution, META refreshes, …
  • Serialization support for JavaScript objects to and from XML, E4X and JSON.

Uri Sarid has a great post that shows how you can do DOM Scraping with Jaxer, and updates it for this latest release:

There’s a lot of other new goodness in Jaxer 1.0, as well as the official released version of the Mozilla engine found in Firefox 3. So for example getElementsByClassName is natively implemented (see John Resig’s speed comparison), in addition to the other Mozilla features such as built-in XPath functionality and a very robust DOM feature set — just what you need for some serious ‘screen scraping’, mashups, and content repurposing.

Let’s see it in action!

It includes code that shows the Sandbox in action, as well as the DOM work:


// Gets a fragment of the remote page's HTML, after some cleanup
function getFragment(title, url, isClassName, identifier, classesToRemove)
{
    var sandbox = new Jaxer.Sandbox(url);
    var contents = sandbox.document[isClassName ? 'getElementsByClassName' : 'getElementById'](identifier);
    var container = addToPage(title, contents);
    if (classesToRemove)
    {
        if (typeof classesToRemove == "string") classesToRemove = [classesToRemove];
        classesToRemove.forEach(function(className)
        {
            removeNodeList(container.getElementsByClassName(className));
        });
    }
    return container.innerHTML;
}
getFragment.proxy = true;

Posted by Dion Almaer at 1:51 pm
7 Comments

4.2 rating from 32 votes

Thursday, July 31st, 2008

No Browser Left Behind… without Canvas

Category: Browsers, Canvas

Vladimir Vukićević normally hacks on Mozilla products, but spent a little time on an experiment with IE. An experiment that looks very exciting indeed.

I love canvas, and wish that it was ubiquitous. We have great wrappers out there such as dojo.gfx, but wouldn’t it be nice if canvas worked everywhere? (and the full API to boot).

Well, Vladimir has an experiment to get it to IE. The approach is very interesting indeed. It isn’t like excanvas which uses VML… and there is a Silverlight bridge being worked on that looks promising. Instead, we have this:

I’ve been working on a native Canvas implementation for IE based on the same rendering core that’s in Firefox.

The same implementation, shoehorned into IE:

With an object tag, a bit of CSS, and (to work around another IE bug) a single line of script, <canvas> elements in HTML just work. I’m excited that this experiment is working out, because lack of Canvas support in IE is one of the reasons people skip Canvas and instead turn to Flash and other plugin technologies.

Congrats on a great hack, and here’s to you making it much, much more.

Posted by Dion Almaer at 2:00 am
6 Comments

4.4 rating from 38 votes