Wednesday, October 17th, 2007

Dealing with the Flexibility of JavaScript

Category: Articles, JavaScript

Neil Roberts has written a piece on Dealing with the Flexibility of JavaScript which delves into functions that are overloaded based on signature.

For example:

javascript

  connect = function(/*...*/){
    if(arguments.length == 1){
      var ao = arguments[0];
    }else{
      var ao = interpolateArgs(arguments, true);
    }
    if(isString(ao.srcFunc) && (ao.srcFunc.toLowerCase() == "onkey")){
      // ...
    }
    if(isArray(ao.srcObj) && ao.srcObj != ""){
      // ...
    }
  }

Commenters on the story had varied opinions.

Isabelle likes bridges:

javascript

  function clicked(event){ processById(event.target.id); }
  function processById(id){ }

Simon Willison and Dylan Schiemann discussed how jQuery and Dojo do different things depending not only on the arguments, but even on the contents of the strings that were passed in.
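The string-content case can be sketched with a hypothetical `interpret` helper (not jQuery's or Dojo's actual implementation): a string starting with `<` is treated as markup, anything else as a selector, roughly the way `$("<div/>")` creates a node while `$("div.item")` runs a query.

```javascript
// Hypothetical sketch: branch on the *contents* of a string argument.
function interpret(str) {
  if (/^\s*</.test(str)) {
    return { action: "createFragment", source: str };
  }
  return { action: "querySelector", source: str };
}
```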

Posted by Dion Almaer at 7:47 am
6 Comments

2.3 rating from 33 votes



So this is basically about creating functions/methods that act as a decision maker based on what is passed to it? Does that make it easier though?

I could easily create a “gatekeeper” function that can handle a wide array of things and do various actions on those things, but isn’t it just another layer of abstraction?

I would assume, by the example given, that the “gatekeeper” would just call the functions that it needs to. So what is the real advantage of having gateKeeper() call func1() for me, when I should just call func1() myself?

It is cool that this is possible, but is it really necessary? Maybe this is going over my head…

Comment by EmEhRKay — October 17, 2007

EmEhRKay, no it’s about avoiding the posted example above, and it asks the same questions that you just did.

Comment by Neil Roberts — October 17, 2007

If I’m thinking along the lines of this post I think I’ve seen something similar done with JSON:

doSomething({wait: 500});
doSomething({callback: "something"});

function doSomething(options) {
  if (options) {
    if (options["wait"] != null) {
      // run wait function
    }
    if (options["callback"] != null) {
      // run callback function
    }
  }
}

Comment by Denny Ferrassoli — October 17, 2007

I don’t know about this. I’m more a fan of a well-defined API. I can see overloading being used to handle different data types that really do the same thing, but if I need a different number of arguments, chances are the function will be behaving differently, and should as such be named differently to better reflect the actual behaviour. I see this as just making debugging logic flow that much harder, with more branching decisions to try to figure out.

Comment by Nathan Derksen — October 18, 2007

Nathan, that is exactly what the article is about, you should read it.

Comment by Neil Roberts — October 18, 2007

Weird, totally missed the link. Must have been tired. Fair enough, I see where the author is going with it. I still prefer to avoid even using type overloading, but the way the author shows it is at least a fairly clean approach. If only JS were strictly typed, so an API could actually enforce data typing. I totally agree in any case that event handlers responding to browser events should only really contain the bare minimum of code, such as a simple call to trigger a custom event or a helper function.

Comment by Nathan Derksen — October 19, 2007
