X-Men. On netflix.

November 15, 2011 Leave a comment

In other news, X-Men.

JACKPOT! AGAIN! And this time for something more/less nerdy:

X-MEN!!!! From your childhood. On Netflix. So much win. I clicked play today and was so impressed by how this show beats the shit out of every other kids’ show on the air right now. The voice acting is godly and the jokes are hilarious. WATCH EPISODE 1 NOW!

Non-Netflix Member Link: https://signup.netflix.com/movie/X-Men/70172490
Netflix Member Link: http://ca.movies.netflix.com/WiMovie/X-Men/70172490?trkid=4022489

For some strange reason, if you’re signed into Netflix you can’t view the non-member link, and if you’re signed out you can’t view the member link.



Rome

November 15, 2011 4 comments

Rome is awesome. Rome is a build server that is speedy and hooked up to a fat pipe ( I was getting ~5.5MB/s downloading from github ( ! ) ). I got acquainted with Rome just tonight when I asked @humph for an account on it ( which he kindly obliged ❤ ). Long story short: a complete do-nothing make cycle takes ~52 seconds, down from ~10 minutes on my supercomputer of a laptop. To get started, follow these easy steps:

  1. Ask @humph to get you an account on Rome.
  2. Configure your environment to forward X11 to a local X11 server. On Linux, this should be as simple as passing -CX to ssh when connecting ( -C enables compression, -X enables X11 forwarding ). On Windows, this is a bit more involved but boils down to the following steps:
    1. Get Xming
    2. Start Xming.
    3. Start PuTTY.
    4. In the options on the left, select Connection -> SSH -> X11
    5. On this window, check “Enable X11 forwarding” and in the box labeled “X display location” enter localhost:0 ( the default Xming display ).
    6. You probably want to save your configuration options. Switch back to the session options by selecting Session in the options on the left.
    7. Enter a reasonable name for this configuration, something like 01 – Rome should be fine.
    8. Enter rome.proximity.on.ca as the server address.
    9. Hit save and you should now be set to login and forward X11!
  3. Execute any application that you desire. In the case of our firefox build, you will probably want to call the debug firefox binary. This should be as simple as navigating to the top-level of your mozilla-central repo and calling:
    [whatever your build directory is]/dist/bin/firefox -ProfileManager -no-remote

    • If you already have a profile that you like to use on this machine for nightlies, you can substitute -ProfileManager on the command line with -P [whatever the profile name is]
    • Obviously, you will need to build firefox first before you can call the binary 😉
  4. Sit back, relax and enjoy the lightning speed builds.
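Putting the Linux side of the steps above together, a session might look like the following sketch ( the username and object directory are assumptions for illustration, not from this post ):

```shell
# Hypothetical session; substitute your own username and objdir.
ssh -CX you@rome.proximity.on.ca    # -C compresses, -X forwards X11
cd ~/mozilla-central
objdir-debug/dist/bin/firefox -ProfileManager -no-remote
```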

0.3 Coming Up Fast

November 14, 2011 Leave a comment

So the due date for 0.3 is coming up soon and I’m honestly just a tad worried… The project I picked up for 0.3 ( migrating XSS Me to web workers ) is proving too big for the time I have left, so I’m thinking I’ll push most of the work to 0.4, with some preliminary work done for 0.3. I’m just not sure that will be enough…

As for Paladin, unfortunately the guys are fairly busy, so for now I’m just waiting until they have some time to take another look at the issue thread. Still, I’m confident my questions will get some attention at some point; until then, I only have to wait.

Done and Done, Pt. 2

November 13, 2011 5 comments

Success! I’ve managed to alter the reported location of the cursor! Here’s what I did to get it to work:

The file I had to modify was content/events/src/nsDOMMouseEvent.cpp, which is the implementation file of the DOM MouseEvent in Firefox.
The updates were as follows:

// ... snip ... line 223

NS_METHOD nsDOMMouseEvent::GetScreenX(PRInt32* aScreenX)
{
  //*aScreenX = GetScreenPoint().x;
  *aScreenX = 17000;
  return NS_OK;
}

NS_METHOD nsDOMMouseEvent::GetScreenY(PRInt32* aScreenY)
{
  //*aScreenY = GetScreenPoint().y;
  *aScreenY = 18000;
  return NS_OK;
}

NS_METHOD nsDOMMouseEvent::GetClientX(PRInt32* aClientX)
{
  //*aClientX = GetClientPoint().x;
  *aClientX = 7000;
  return NS_OK;
}

NS_METHOD nsDOMMouseEvent::GetClientY(PRInt32* aClientY)
{
  //*aClientY = GetClientPoint().y;
  *aClientY = 8000;
  return NS_OK;
}

// ... /snip ...

As you can see, I forced the reported values of the cursor’s position to 17000/18000 for the screenX/screenY members and 7000/8000 for the clientX/clientY members. This is a pretty naive way to do it, but it produces results. Check it out!

This is what happens now when a user attempts to right-click anywhere in the page: the context menu appears far off toward the bottom-right corner ( since the coordinates go off screen ).

Interestingly enough, the changes to the event class do not affect the native Windows right-click context menu ( triggered by right-clicking in the same vertical space as the minimize/maximize/close buttons ), as can be seen here. Text selection, clicking buttons/links, and forms are also unaffected.

Also interesting: no assertion errors are usually thrown. I did manage to provoke one, however, simply by middle-clicking ( clicking the mouse wheel ) somewhere inside the page. The page being used should have enough text for the quick-drag middle-click popup to appear; this page should work as an example. Scroll around a bit if you don’t succeed at first.

In conclusion, it seems that it is not immediately obvious how MouseEvent is used internally. On the one hand, the values reported through the event are the only way for the JS engine to figure out where the cursor is; as far as JS is concerned, whatever we return here is where the cursor is. On the other hand, some internals also seem to use the values returned from the mouse event’s getter functions; that may make bending the values to our whims difficult, given the sheer size of Firefox’s internals and the fact that we don’t *exactly* know what our meddling will affect.

P.S. The MouseEvent class also defines some long members (screenX, screenY, clientX, clientY); from what I could tell, these are only set during initialization:

// ... snip ... line 106

NS_METHOD nsDOMMouseEvent::InitMouseEvent(const nsAString & aType, PRBool aCanBubble, PRBool aCancelable,
                                nsIDOMAbstractView *aView, PRInt32 aDetail, PRInt32 aScreenX,
                                PRInt32 aScreenY, PRInt32 aClientX, PRInt32 aClientY,
                                PRBool aCtrlKey, PRBool aAltKey, PRBool aShiftKey,
                                PRBool aMetaKey, PRUint16 aButton, nsIDOMEventTarget *aRelatedTarget)
{
  nsresult rv = nsDOMUIEvent::InitUIEvent(aType, aCanBubble, aCancelable, aView, aDetail);
  NS_ENSURE_SUCCESS(rv, rv);

  switch (mEvent->eventStructType)
  {
    case NS_MOUSE_EVENT:
    case NS_DRAG_EVENT:
    {
       static_cast<nsMouseEvent_base*>(mEvent)->relatedTarget = aRelatedTarget;
       static_cast<nsMouseEvent_base*>(mEvent)->button = aButton;
       nsInputEvent* inputEvent = static_cast<nsInputEvent*>(mEvent);
       inputEvent->isControl = aCtrlKey;
       inputEvent->isAlt = aAltKey;
       inputEvent->isShift = aShiftKey;
       inputEvent->isMeta = aMetaKey;
       mClientPoint.x = aClientX;
       mClientPoint.y = aClientY;
       inputEvent->refPoint.x = aScreenX;
       inputEvent->refPoint.y = aScreenY;

       if (mEvent->eventStructType == NS_MOUSE_EVENT) {
         nsMouseEvent* mouseEvent = static_cast<nsMouseEvent*>(mEvent);
         mouseEvent->clickCount = aDetail;
       }
       break;
    }
    default:
       break;
  }

  return NS_OK;
}

// ... /snip ...

It may also be interesting to play around with these values; I have not touched these values for this update.

Done and Done, Pt. 1

November 12, 2011 2 comments

I set myself to two tasks, the first of which was getting something to appear in the Firefox logs. Taking a page out of David Humphrey’s in-class demonstration:

//// Connection

Connection::Connection(Service *aService,
                       int aFlags)
: sharedAsyncExecutionMutex("Connection::sharedAsyncExecutionMutex")
, sharedDBMutex("Connection::sharedDBMutex")
, threadOpenedOn(do_GetCurrentThread())
, mDBConn(nsnull)
, mAsyncExecutionThreadShuttingDown(false)
, mTransactionInProgress(PR_FALSE)
, mProgressHandler(nsnull)
, mFlags(aFlags)
, mStorageService(aService)
{
  // Hasan Edit
  fprintf( stderr, "Hello, Hasan. I like you and you are turning me on." );
}

And the result:

Done and done! Now to report/abuse X/Y mouse values…

Building Firefox

November 2, 2011 Leave a comment

It’s building :O

Game Development for the Student Enthusiast/Entrepreneur and Why Threads Aren’t Always Great

November 1, 2011 Leave a comment

This is the story of one student’s quest to code up a massively parallel game engine. It all started about 6 months ago when my group-mates and I, one of whom I now co-own a startup game company with, decided to write a game engine for our PRJ666 project requirement. At the end of PRJ666, we had a fully functioning 3D game engine, complete with physics, shaders, sound, input, and more. Our home-made engine possessed the absolute essentials for someone to create a game. However, it did suffer from a few problems; the one I will talk about today is performance.

I will explain how the engine operates to give you an idea of why we’re running into a performance bottleneck in the first place.
The starting point is a highly serial, synchronous game engine. We built it on the backs of many open-source back-end systems: OGRE for graphics, ODE for physics, Lua for scripting, etc…

The idea was that a time-keeping core and a list of objects, together with a number of specialized subsystems, could represent a game world by continually updating each object on every tick. The picture above shows what a typical tick cycle looks like. Typically, each subsystem iterates over all objects, updates them as required, and then passes control to the next subsystem. The Lua subsystem does a bit more work: it iterates over each object in the object list and calls the object’s update function, passing it the elapsed time.

This architecture proved to work but it suffered from heavy performance issues, mostly during the execution of the Lua subsystem.

After butting heads for a while with my partner and a quick consultation with my father, we came up with what we thought to be the perfect solution: a massively parallelized architecture. The goal here was not just to be subsystem-parallel but to be object-parallel as well. That meant that all objects in the world would be updating in parallel and all subsystems would be updating in parallel, although all subsystem updates (except for scripting/Lua) had to happen before any object updates were started. The idea was that by dividing each and every update step into discrete read and write steps, we could run many things in parallel and gain a massive performance boost.

To assess this architecture, we decided to ask for some good-old-fashioned academic review. In this case, the victims were David Humphrey and Chris Szalwinski, my “Open Source Development” and “Game Engine Design” professors, respectively. Lured in with the promise of coffee and donuts, they agreed to give me feedback on the proposed architecture.

My meeting with them was today. Walking into the meeting, I honestly had no idea where to start. By the end of the meeting, I had learned quite a lot about threading and about where some of the problems of our current system lie. Here’s a summary:

  1. Parallelism breaks down if the unit of work is too small; in this case, an object’s update function is far too small to offset the costs of thread synchronization.
  2. Cause and effect become ambiguous as object updates lose their sequential nature.
  3. The number of bugs and the complexity of bugs dramatically increase as micro-threading issues enter the fray.
  4. Threads are a band-aid solution that should be used with caution.

Before anyone jumps on my head for implying that threads may not always be the best idea, let me elaborate. The first concern is perfectly legitimate and would take sound design decisions to avoid; for young developers such as ourselves, going down this route so early is premature.

The second concern is interesting: real-world philosophical problems suddenly jump into the synthetic game world when objects begin updating in parallel, even if only the read steps are parallel. On its own, this is not so bad; it simply implies that our programming practices would have to adapt to this new environment and its constraints. Mixed with the third concern, however, this is a recipe for disaster.

The third concern is, of course, heisenbugs. Everyone knows that threading can be tricky, and that’s on large systems with obvious, static threading routines. On a system like ours, where all objects thread dynamically ( assigning parallel tasks to a dispatcher on the fly ) and interact in non-obvious ways, this is a nightmare.

Really, it was the fourth concern that finally turned me away from going down the threads route so quickly. Threads should be considered a band-aid solution here because we would be using hardware potential to solve a design problem. The design problem is that the system does not update efficiently, and the reason for that is that all objects are treated as equals. Because of this, it is impossible for the system to cull objects from an update cycle. This is the most basic and fundamental problem of the system.

The solution, then, is not to treat all objects as equals. One way to gain the ability to filter objects is to use context scopes to isolate and group related objects. These structures can then be used to traverse trees of relevance. This really is the key point: the system must be able to detect the relevance of a given object and filter it from some or all update activities given the current game state. A key tenet here is that objects in one context should not be able to directly change objects in parent or sibling contexts.

Furthermore, this divergence of contexts can later be threaded where appropriate and the performance boosts will then be obvious and controlled.

To break large lists of objects into contexts, however, some concrete data must govern the design choices. To that end, I will reiterate what many big businesses already know: automated testing is king. A test harness full of performance and integration tests, coupled with a fixed-time demo that puts the engine through some typical scenarios while monitoring performance, can generate graph after graph of data. That data can show where the system behaves inefficiently and influence later design decisions. It also made the need for some kind of automated build system obvious.

The benefits of a system that could build the codebase at a particular changeset, automatically run tests against it, collect the results, and publish them somewhere should be obvious. Developers could use such a system daily to gauge the performance of their code on the fly. This is crucial because it then becomes clear what broke the system, how, when, and to what effect.

With this in mind, I went to the library, took out some books on simulating the natural processes of the world on a computer and set to reading. Today was quite useful and inspirational and I had hoped to share as much of it as possible with others. I hope this hasn’t been *too* boring of a read and wish you luck on your journey in software development.