Saturday, December 18, 2010

Minimizing Editing

Admittedly, I have a lot of writing to do these days.  My two-week vacation is dedicated to writing.  While I write, I sometimes pause and ponder my chosen methodology.

My writing methodology comes from high school, and it hasn't changed much since then.  I recall that writing a text for school simply consisted of an introduction, a body, and a conclusion.  Sufficient for most essays.  Longer research didn't suffer too much from my long-winded explanations.  Despite passing with good grades, in retrospect something doesn't seem right.

Before I continue -- I think it's best that we agree on one thing: everyone gets the same work done slightly differently.  With that said, the "write and revise until final" approach doesn't work well for me.  I do enjoy writing and seeing where it leads me; however, that does not cut it for larger, more complex works.

So, let's step back a bit...  the research method I was taught was to gather facts, mix them up, and then sort them afterwards.  The facts should be on pieces of paper, with as few words as possible to avoid any plagiarism (all papers should be kept as evidence that the work is original).  Once pieced together, the ideas could be transformed into paragraphs, and then refined.  A very good way to work.

Larger documents are essentially smaller documents strung together with a common narrative.  That escaped me for the longest time.

What also escaped me was the importance of the order of ideas.  For smaller documents, this is trivial.  For larger documents, things become tricky.  The problem, I believe (a belief which may change as I become wiser -- hopefully I become wiser as I grow older), is that thought is generally non-linear.  Ideas must be strung together to form a linear narrative.

Let's consider a report on some physical phenomena.  I dare not talk about stories as they have a natural temporal narrative that should be followed (except in certain rare cases).  Most physical phenomena have some sort of intuitive description, a mathematical model (with dependencies), an explanation (with dependencies), and some observations.

The last three are tightly intertwined.  We could follow the above outline.  Each section would start with some pre-requisite knowledge followed by the actual description.

We could also start by looking at observations and explaining how they lead to an explanation and mathematical model.

Notice that the organization of information restricts us.  Once a path is chosen, deviating from it once some text is written is difficult (unless you're a good editor, of course).

So here is my realization -- which may seem obvious to everyone else -- the final draft can be written in the first round.  How?  Consider a computer program.  It must be written to perfection or else bugs appear.  We know software developers seldom get it right the first time; but there are ways to write a program which make bugs stick out and become easier to cull (adding extra code to validate the internal workings of the program).

I believe that the work of writing can be shifted almost entirely to the phase of dealing with abstract ideas.  If we carefully string the ideas together -- working out metaphors to express complex ideas -- before writing begins, then we are in better shape.  Essentially, the notes are organized.  Words are added to string them together following a global plan describing the flow.  Then the result is tweaked and a completed document arises.

Certain patterns taught in school naturally arise if this is done correctly.

To prove (not in the strictest sense) this point, I'll give the following argument: certain arguments are complex.  Properly organized, they'll introduce content slowly.  Pre-requisite information appears before it is used.  The complex argument is isolated so that it may be presented without extraneous information.  Definitions and such therefore logically belong at the front.  If we keep on choosing what's easier for the reader, we'll get an introduction, some facts, a body, and a conclusion.  Don't believe me?  Try it for yourselves.  Get a different answer?  I'd be interested in knowing what it is.

Saturday, December 11, 2010

A Basic State Machine for Games

While coding, I've stumbled across a different method of representing state machines in games compared to what I usually write.

Before I go into the details -- what is a state machine, you ask?  Imagine an application, a game if you will.  A hypothetical game starts in a main menu, then it transitions to a play-game state, and from there it can transition to either a score-screen or a game-over state.  The score-screen will go back to the play-game state (on another level, we suppose), and the game-over state back to the main menu.

We can also apply states to sprites.  They can be walking, running, or standing still.  They can also be healthy, or weak.  Each of these has an appropriate visual representation.  AI in games (if we can call it AI) is usually represented as a series of states.

More generally, we tend to represent states as a directed (potentially cyclic) graph.  However, I believe this view to be overly simplistic and problematic (disclaimer: I am not the originator of this viewpoint; I simply agree with it).  Also, on most platforms we now have the compute resources to afford a richer view.

I've found it convenient to redefine a "state" as a value between 0 and 1.  Of course, this complicates the programming, but it has many benefits.  0 is disabled, 1 is enabled.  Below 0 is less than enabled; above 1 is saturated -- extremely enabled.  The latter two, though they may seem redundant, appeal to creative types.

I'll confess: there are more complex state systems described in the literature that allow for more flexibility than what I describe here.  Here, I describe what I found to be suitable for a game -- your needs (even if for a game) may differ from mine.

Now, consider a tree structure.  The root state is always enabled at 1.  The root state handles transitions among the child states (through some pre-determined means of communication).  Here's where things get interesting: all of the sub-states at the root can be enabled at once.  Things can be half-enabled.

First implication: the layering among sub-states becomes important.  Things can be drawn on top of each other.

Second implication: the score-screen can become its own state, and hover above the game itself.  It does not need to be a substate of the game (this depends upon the game and how things are set up).  The same goes for the pause screen.  Yes, these are simplistic examples -- they could be done trivially using regular states as described at first.

Third implication: transitions among states can be programmed as enabling/disabling a state -- moving the enabled value between 0 and 1.  This provides an elegant way to describe the transitioning in of the pause menu.  It also provides a means, independent of the animation, to represent the time needed to get into the menu.  (All transition animation speeds should be easily tuned.)
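To make this concrete, here is a minimal sketch of such a tree of soft states.  The names (State, enabled, target) are mine, purely for illustration -- a real game would hang rendering and input handling off of these nodes:

#include <vector>

struct State
{
  float enabled;  //0 = off, 1 = on; below 0 and above 1 permitted for effects
  float target;   //where the enabled value is heading
  float speed;    //change in enabled per second -- tunable per transition
  std::vector<State*> children;

  virtual ~State() {}

  //The root calls update each frame; a transition is just the enabled
  //value drifting toward its target at the tuned speed.
  void update(float dt)
  {
    float step = speed * dt;
    if (enabled < target)
      enabled = (enabled + step < target) ? enabled + step : target;
    else
      enabled = (enabled - step > target) ? enabled - step : target;
    for (State *c : children)
      c->update(dt);
  }

  //Children draw after (on top of) their parent, so layering falls
  //out of the tree order.
  void draw()
  {
    if (enabled <= 0.0f) return;  //fully disabled subtrees cost nothing
    render(enabled);              //e.g. fade or slide by the enabled value
    for (State *c : children)
      c->draw();
  }

  virtual void render(float) {}
};

Enabling the pause menu then amounts to setting its target to 1 with an appropriate speed; the menu eases in over a tunable amount of time, independent of whatever animation it plays.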

Keeping states in a tree (where the data structures are always in memory, with manual dumping to disk when memory is low) is practical.  One, we maximize the use of memory (of course this is not perfect in all situations -- and in some cases it may be counter-productive).  Two, dumping all elements of the tree back to disk is... trivial!  If done correctly, of course!
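For instance, the dump can be a simple pre-order walk of the tree, with each state writing its own little blob (a sketch building on the State structure above; a real version would tag and version the records):

#include <ostream>

void serialize(const State &s, std::ostream &out)
{
  //Each state persists its transition values, then its children, in
  //the same order they will be rebuilt in.
  out.write(reinterpret_cast<const char*>(&s.enabled), sizeof(s.enabled));
  out.write(reinterpret_cast<const char*>(&s.target), sizeof(s.target));
  for (const State *c : s.children)
    serialize(*c, out);
}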

Why dump to disk?  Games should attempt to restore what they were doing last.  Skip the main menu -- go right to the game -- into the exact menu that the player was viewing.  Transitions should resume, as though the game was never turned off.

Why even go into that amount of detail?  I'm not designing for super-computers.  I'm thinking about portable devices -- devices such as iPods, where running a game in the background is silly.  The game should free resources and let the foreground application run with a lot of CPU power.

Shouldn't the OS manage all these details?  Isn't it silly for an application to worry about such trivialities?  Yes -- the OS could manage these details with the help of the application (some apps really do need to work in the background, while others could be swapped to disk).  But if an application is coded to worry about such trivialities, it can do a better job than the OS.  For example, if data was loaded from disk with a bit of post-processing (instead of mmap'ed), then the OS has no clue that these pages can simply be released and rebuilt later, rather than dumped to swap.

Any other advantage to such an odd state machine?  I'll argue that it better represents the transition between walking, running, and standing still - but that might be more of a headache.

A more genuine use might be the HUD elements.

And then -- to be completely honest -- saving to disk can be done using a standard state system.  Reflection would just be required to make it much easier.  The transitions can be separate states.

In the end - whatever best suits the application (or game) at hand should be used.  Whatever gets the correct results faster in most cases.

Monday, November 15, 2010

Abusing C++ Syntax for GL State Management

One of my little side projects is writing a light graphics library with which I can better express my visual intentions, avoiding the common graphics pipeline of artist + magic glue = something that sorta works.  There's already been plenty of work on animating humans more realistically (for properly rigged meshes).

Anyhow -- that's beside the point of this post.  When dealing with OpenGL I have to explicitly manage the state -- at many points enabling or disabling a state within a rendering function and returning to the previous state once out of the block of code.

What automatically gets called at the end of a block of code?  A destructor, of course!  So, I ended up creating myself a neat little object that sets the state of the GL but reverts it at the end of a block.  It has cleaned up my code quite a bit, as I can now worry about how the state is inherited from the calling function rather than how the calling function permanently alters the state.
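Here's a minimal sketch of the idea -- GLStateScope is an illustrative name, not the object from my library, and I'm assuming OpenGL ES 1.1 on iOS for glIsEnabled and the include path:

#include <OpenGLES/ES1/gl.h>

class GLStateScope
{
  GLenum    m_cap;        //which capability we touched (e.g. GL_BLEND)
  GLboolean m_wasEnabled; //what the calling function had set
public:
  GLStateScope(GLenum in_cap, bool in_enable)
  : m_cap(in_cap)
  , m_wasEnabled(glIsEnabled(in_cap))
  {
    if (in_enable)
      glEnable(m_cap);
    else
      glDisable(m_cap);
  }
  ~GLStateScope()
  {
    //Revert to whatever state was inherited, at the end of the block.
    if (m_wasEnabled)
      glEnable(m_cap);
    else
      glDisable(m_cap);
  }
};

//Usage: blending is off only within this block.
//{
//  GLStateScope s(GL_BLEND, false);
//  ...rendering calls...
//}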

To avoid uselessly calling GL functions, I further optimized the code by keeping track of what the internal state is and what state is desired for the next rendering call.  This prevents useless calls to the GL...

But, back to abuse of syntax; I like the idea of creating my own objects whose entire point is to abuse the scoping rules of C++ objects.  So, in the back of my mind, I've been mulling the idea of using blocks and destructors to denote the equivalent of a glBegin and glEnd (being a bad programmer, I've reintroduced immediate mode on top of OpenGL ES)...

This object would allow me to do something like:
{
  BeginDrawing drawObj(GL_TRIANGLE_STRIP);
  drawObj.vertex(....);
  //implicit end of drawing + submission to the GL
}

Seem messy?  I'll argue it's the lesser of two evils.  The compiler will now explicitly tell you if a block isn't ended.  I've even done odder things:
{
   RenderToTarget t(myFrameBuffer);
   DisableBlendFunc s;
   //Rendering calls ...
   //Implicit returning of previous state of GL_BLEND enable option...
   //Implicit return to previous frame buffer...
}

Isn't that elegant?  I sure think so!


Update on February 24th, 2015 (fixed some wording, added the following text):

Time flies, and there are a few additions that I should make to this document thanks to C++11.  We can further simplify and reduce errors thanks to the changes in the language.

Consider the render to target example rewritten using C++11 lambdas:
RenderToTarget(myFrameBuffer, []
{
   // Set up your state.
   // Do some awesome rendering to your frame buffer.
   // More stuff!
});

Now, RenderToTarget is a function rather than a class, and it requires an explicitly scoped set of operations.  The constructor and destructor are no longer abused; rather, RenderToTarget can now do some setup, invoke the lambda, and tear down -- which can include restoring state.

For general state, I'm hesitant to recommend lambdas, as the number of nested blocks would explode, hindering legibility.  Using objects (as described before) would work.  I'm more partial towards the following:
StateBlock([]
{
   // Set up state...
   // Render! Render! Render!
   // More stuff!
});

Well, now RenderToTarget is just a special case of StateBlock.  And StateBlock allows setting up arbitrary states, can be nested as needed, and at the end of the block undoes any state changes done within.

Obviously, for this to work, the simplified API must provide its own state-altering calls, akin to the objects previously described.  And avoiding scoped objects means that we don't have the odd issue of various dummy variables lying around simply to have their destructors called at the end of the block.

Sunday, October 10, 2010

StarCraft II: Terran vs. Hard Terran AI Strategy

My experiments with StarCraft II have revealed a weakness in the hard Terran AI in a custom game.  Simply put, a sufficiently sized army of marines can take out the AI...  Let's look at the specifics, shall we?

First, send every SCV to mine the mineral field.  Start training SCVs so that you have exactly 11.  Don't train more, or else 50 minerals will be set aside for the unit -- and you want those 50 minerals.

Ok, got 10?  One more is being trained?  Good.  Take an SCV that is mining and get it to build a Supply Depot, then hold shift and right-click the mineral field after giving the order to build the Supply Depot.  This ensures the SCV doesn't need any micromanagement and will automatically return to mining once it's done building.

Have enough for a Vespene Gas Refinery?  Build one!  You'll need it.  And start training 3 more SCVs, sending them to the Vespene Geyser.  You only need one refinery; we'll be bound more by minerals.

As the number of collected minerals increases, keep on building SCVs to collect even more minerals (about 2 per mineral field is ideal, with an extra one or two to use for building).  At the earliest possible moment, build a barracks, a bunker, and another barracks.  You want two barracks, but need a bunker.  Make sure the entrances to your base have at least one bunker.  Build a reactor on each barracks.

Now, you can produce 4 marines in parallel.  Start building an army.  First fill the bunkers, then just build more marines, Supply Depots, and SCVs...

While all of that is happening, you'll want to build an Orbital Command, an Engineering Bay, a Sensor Tower, and a Factory (with attached reactor) -- in that order.  The Orbital Command allows you to upgrade the Supply Depots for more units (without spending precious minerals that could go towards building more marines) and quickly collect minerals using harvesters...  Also, spying on the enemy is always a good idea.

Ideally, 5 minutes total have passed.  The enemy should be preparing an attack with marines and marauders.  Keep the bunker safe, and away you go!

Try to minimize your casualties on the first attack by the AI -- then immediately counter-attack if you have about 10 to 20 marines.  You might also have a few Hellions by now too.

The strategy essentially boils down to upgrading marines and taking advantage of the ability to produce 4 marines and 2 Hellions rather quickly to send into the battlefield.  It works against the hard Terran AI; it might also work against Zerg and Protoss.  You also have to accept that you'll lose many units, and that ideally you're on the offensive, not stuck protecting your base.  Being defensive won't help if the AI progresses too far ahead along the tech tree...  You might want some Medivacs -- but I rather enjoy the suicide trips.

The last thing that I do is have an SCV build turrets (just in case) and additional Command Centres.  I like to fly these Command Centres off to rarely checked islands, or to nearby mineral fields.  Once landed, transform them into Planetary Fortresses and start mining.  Mine as many minerals as you can!

You'll also be more effective if you focus the fire-power of the marines on specific targets rather than letting them attack and defend what they want (which is, for our purposes, good enough).

Particles on iPad: Part IV

I should note that I'm not doing standard particles (with emitters, colour change, limited life span, etc.) as described by Reeves in 1983 - for that I suggest reading [Reeves83].

What I'm looking at is stably modelling, with a decent frame-rate, a massive system of interacting small spherical objects -- which I loosely call particles.  My initial attempt was to use smoothed particle hydrodynamics in order to imbue the masses of particles with the appearance of fluid flow (see [Monaghan92]).  That didn't work as I'd hoped.  So I moved on to modelling particles using a finite element method.

The issue with the finite element method is best explained with a column a dozen particles high falling and colliding with the ground.  Assume the column is perfectly aligned so that these spherical objects do not slide to the side.  Consider the time of impact.  On the first frame, the bottom particle will hit the ground and a repulsive force will be applied.  On the second frame that repulsive force will be applied to that particle and the one above it.  However, the particle above it still has quite a bit of speed and, due to the collision, is applying a bit of a downward force.  In the end, it can take a few hundred frames before the system starts to show a stable result.  In the meantime the user is presented with particles unrealistically entering each other's physical space.

That led me to search for solutions that would simultaneously solve for all the collisions - some sort of magic way to prevent the visual oddities while the system stabilizes itself.  A way to more quickly reach a stable state if you prefer.

So I ended up writing out the equations that solved for the collisions and seeing how to solve them as a single set of equations, not treating each individually.  The obvious method, I believed, was to write it out as a matrix.  Then, inverting and solving the system would lead to the desired solution (of course, I'd be forced to use a relaxation scheme...).

Then came the issue: let's say I'm solving for "k" -- a value indicating how far a particle should move.  I have at least 4 equations (one per colliding particle) used to determine each "k".  I could easily solve for "k" locally for each equation -- however, which equation would give me the "better" answer so that I could obtain a solution through relaxation?  Averaging the results should be satisfactory...

However, what am I computing?  A form of pressure.  Pressure accumulates where there are plenty of particles.  Taking the gradient of the pressure would give me a force pointing in the best direction for the particles to avoid collisions.  A bit of doodling on paper revealed that pressure would be a function of density (how close particles are to each other).

Doesn't this sound awfully familiar?  Yes -- a fluid simulation using smoothed particle hydrodynamics would do exactly that.  I've just started to re-invent the wheel.  So, let's stop reinventing and see what went wrong.
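For reference, the closure I kept re-deriving is the standard SPH one (here $k$ is a stiffness constant, $\rho_0$ the rest density, and $W$ the smoothing kernel with support $h$):

$$p_i = k(\rho_i - \rho_0), \qquad \rho_i = \sum_j m_j\, W(\lvert x_i - x_j \rvert, h)$$

with the force on particle $i$ following the negative gradient of the pressure field.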

My particles were overly compressing each other.  As though they were not being forced apart - as though pressure was never strong enough to repel them.  Another observation is that the system would not stabilize...

Now, let's take another step back.  Let's consider pressure in more detail -- pressure in an incompressible system, to be precise.  Pressure should build up as the system forces itself into a corner / against an obstacle.  That is, pressure is not local to a small set of particles.  Specifically, the force arising from pressure will act to minimize that pressure.

Imagine a local system whose pressure gradient points downward, towards a set of particles with comparatively little or no pressure.  Now suppose these particles form the liquid in a glass of water.  The particles below are tightly packed, so they should repel a new particle -- not be its destination.

So I'm going to review the equations and better plan, on paper, how I'll model the particles.  Then I'll implement a new version in the coming weeks.  A few posts on my thoughts may appear in the meantime...

Saturday, October 9, 2010

How Could I Live Without Property Lists?

I've started to use property lists.  At first I avoided them since I thought they were daunting, but they are the easiest thing in the world to use!

What's the first thing we should do?  First, let's get the path to our property list from the main bundle:

NSBundle *mb = [NSBundle mainBundle];
NSString *dataPath = [mb pathForResource:@"MyPropertyListFile" ofType:@"plist"];



Second, for good measure, we should make sure that the file was found:

if (dataPath == nil)
    NSLog(@"Unable to find MyPropertyListFile.plist");


Ok, you'd want to do more than output a simple line of text.  Next, we want to open the file.  I recommend mapping the file into memory -- so if the system gets low on resources, it can release the memory used by the contents of the property list file:

NSData *data = [NSData dataWithContentsOfMappedFile:dataPath];


Next, let's get the content found within our property list:

NSDictionary *Plist =
    [NSPropertyListSerialization propertyListFromData:data
                                     mutabilityOption:0
                                               format:NULL
                                     errorDescription:nil];

Ok.  That's it!  The contents of the property list file are within the dictionary.  To be honest, the method I'm using will be deprecated soon -- but it works in iOS 3.x, where the newer method only works in iOS 4.x.  Nonetheless, they both do something very similar.

Now, using a property list editor (either Xcode or the one in the Utilities folder within the Applications folder), edit your property lists as you please.  Dictionary types allow you to assign string keys to values (of varying types).  Arrays give you a list.  The others are scalar types.

What's great is that each maps to their underlying NSObject counterparts.  So a dictionary in a property list becomes an NSDictionary.
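As a quick illustration, extracting values is one message per key (the keys here are hypothetical -- whatever you put in the plist is what you query):

NSNumber *difficulty = [Plist objectForKey:@"Difficulty"];
NSArray *levels = [Plist objectForKey:@"Levels"];
NSLog(@"difficulty=%d over %u levels",
      [difficulty intValue], (unsigned)[levels count]);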

I'm aware that property lists can act as data sources and do much more complicated work.  However, they make for very nifty configuration files.  With less than a dozen lines of code you can open a property list and extract data from it.

With external configuration files so easy to set up, there is little excuse to hard-code values that might change later on.

<minirant>
I used to write my own parsers to load configurations from external files.  Then I started to dump them in header files.  However, property lists are a much more elegant solution as they don't need a recompile.  Data can simply be queried from a dictionary root object...
</minirant>

Friday, October 8, 2010

Particles on iPad: Part III

I've worked out a few stability issues.  The iPod touch, first generation, seems to handle about 200 particles on screen (each colliding and exerting forces on each other) at about 15fps.  Not ideal...  I'm going to delve back into the math books and learn until I can implement a better integration scheme.

I have fixed the stability issues.  The force repelling the particles acted like a one-way spring.  That is, it just ejected stuff but did not apply forces to bring them back towards the particle (well, sphere).  My springs now use the same equation, but they have an area of effect equal to the radius of the particle.  This ensures that the maximum force applied to bring the particles together does not exceed the force used to repel particles.

Another issue was that my clamping of velocities to satisfy the stability criterion was done after the velocity was applied to the positions.  This meant that particles could travel further than what was safe (in other words, further than the distance a particle could travel before the simulation would oscillate into explosion).
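Here's a sketch of the corrected ordering (the types and the maxStep limit are illustrative, not my actual code):

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };
static Vec2 operator+(Vec2 a, Vec2 b) { return Vec2{a.x + b.x, a.y + b.y}; }
static Vec2 operator*(Vec2 a, float s) { return Vec2{a.x * s, a.y * s}; }
static float length(Vec2 a) { return std::sqrt(a.x * a.x + a.y * a.y); }

struct Particle { Vec2 pos, vel, force; float mass; };

void integrate(std::vector<Particle> &ps, float dt, float maxStep)
{
  for (Particle &p : ps)
  {
    p.vel = p.vel + p.force * (dt / p.mass);  //accumulate forces

    //Clamp BEFORE touching the position, so no particle can move
    //further than maxStep (the safe distance) in a single frame.
    float speed = length(p.vel);
    float limit = maxStep / dt;
    if (speed > limit)
      p.vel = p.vel * (limit / speed);

    p.pos = p.pos + p.vel * dt;               //only now move
  }
}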

Lastly, even with all of this working very nicely, I got the same jumbling I got with the SPH method, just a bit less of it.  The problem was that as particles were pushed down (say, due to gravity), they would compress temporarily as the system (indirectly) resolved the pressure each frame.  Ideally, I'd want zero compression -- or zero divergence.

To fix this last issue, I believe I need to brush up on the underlying mathematics.  Namely linear algebra, to get a better hold of the concepts of eigenvectors and eigenvalues in order to work out implicit methods of numerical integration.

The other problem is speed.  At 200 interacting particles, the iPod isn't a speed demon.  It's rather slow.

In the end, because of speed and math, I'm shelving my particle idea for a year or so, waiting for the hardware and my knowledge of mathematics to improve.

In the meanwhile, working within my current limits, I'm mentally playing with a means of gameplay that would take advantage of multi-touch screens.  For iPods/iPads, I'm very interested in what interactions I can achieve which were impossible or impractical using regular game controllers, mice, and keyboards.

Particles and the iPad: Part II

I haven't given up on particles -- I intend to get a system working where the world is composed of small particles that interact with each other.  My latest experiment consists of a simple discrete element method.  The method works perfectly for collisions; however, as I try to introduce springs between particles, the model starts to explode.

The reason is that the force from the spring is quite big and amplifies the forces from the other particles.  The simple solution would be more time steps; however, I believe there must be a better way to do this.

My next experiment will involve altering the repulsive and spring forces.  First, they will not vary as much based upon distance.  That is, the force will not go to infinity as two particles approach the limit of overlapping.  Similarly, for my springs, the force will appear like a sine-wave.  It will be very strong for a short radius and gracefully diminish over distance, sort of like a magnet.
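One curve with that shape -- purely illustrative, as I haven't settled on the exact equation -- would be

$$f(d) = f_{\max}\cos\!\left(\frac{\pi d}{2D}\right), \qquad 0 \le d \le D$$

full strength at contact ($d = 0$), falling smoothly to zero at the cutoff distance $D$.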

The former strategy will help reduce the apparent visuals from explosions by preventing particles from exiting a collision with massive force.  However, without the springs this section is perfectly stable, which makes me believe I shouldn't play with it...

The latter will provide a smoother function for springs.  It's more for visual appeal than stability.  For stability, though, I will need to rework the spring algorithms so that they do not exert too much force.

A final note: I might re-introduce pressure calculations a la smoothed particle hydrodynamics.  It would provide more realistic motion for my particles, which are currently synthetically confined to the plane.  Actually, first, I'll get them off the plane by adding a bit of random jitter to their initial position.  The shear force should give the effect that I'm looking for...

Anyhow, back to the drawing board...

Monday, October 4, 2010

SPH on iPad

I've just implemented my first SPH solver.  It's quite stable, and it runs at a decent speed.  I'm happy that it works, but I haven't gotten to do much else this weekend.

My current solver is literally what Muller described in Particle-Based Fluid Simulation for Interactive Applications.  I don't want to go experimental on the first version; I'm simply searching for a feel of how the simulation reacts to stimuli, and what it might be useful for.

First, this is not behaving the way I expected.  The method, for lack of better words, uses a set of particles to approximate what occurs within the space.  I was expecting it to behave more like a massive cluster of discrete particles.

My issue is that density is computed based on the effect a particle has on other particles.

My interest in particle-based simulations is finite particles.  Essentially a ball-pit: cheaper-to-compute particles, but more of them -- massive atoms whose complex interactions yield fluid flow.  This experiment led me to a few conclusions which will dictate my next experiment.

First, pressure for incompressible flow -- I believe -- is very similar to how balls in a ball-pit function.  As such, I'd look at the laws of rigid-body motion to model the pressure.  Namely, the forces arising due to collisions.  If no ball can enter the area of another ball, I've done it right.  But to make sure that human error wasn't the cause, I'll compare Muller's equations to those of rigid body motion to see how they differ.

Second, viscosity is a force that arises from interactions of nearby particles.  The motion should rub off.  For that, I'd use a simple area of effect -- that is, nearby (colliding) atoms have a given velocity whose weighted average is blended with the current atom's velocity.  The reason for limiting the area of effect to colliding particles is to ensure there is no "space" between atoms.  I'm aiming for a very fine approximation of fluid flow, not one that is coarse.
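In symbols, the blend I have in mind would be something like this (the blending factor $\alpha$ and the weights $w_{ij}$ are placeholders of my own):

$$v_i \leftarrow (1 - \alpha)\,v_i + \alpha\,\frac{\sum_{j \in N(i)} w_{ij}\, v_j}{\sum_{j \in N(i)} w_{ij}}$$

where $N(i)$ is the set of atoms currently colliding with atom $i$.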

Third, I'm interested in non-uniform matter.  That is, each atom is not just water or air; half could be water and half could be air.  The last thing I'd add to the system are springs -- implicit springs to neighbours based upon stickiness factors.  I want goop.  Semi-solids.

Last, there are one-time springs -- used for solids.  A coarse representation of glue.

Finally, as I said, I should review the equations that I've implemented, and better understand how they compare to the equations for rigid-body motion before I build the next prototype.

Why SPH on iPad?  Doesn't it seem ridiculous?  Consider a video-game.  Right now we are stuck in a rut where things are coded for a purpose...  Wouldn't it be great to be able to paint properties of materials and let the game engine do the rest?  Clothes and fluids should be intrinsically the same material.

Or so I believe for now.  I'll comment on the next prototype...

Sunday, October 3, 2010

Meaning of Life

Oftentimes I begin to wonder, like most people on this planet, what the purpose of life is.  Some, sure of their answer, say that it is to reproduce.

I'd argue otherwise, using a few simple concepts:

Reversibility: Anything that we do, we could imagine potentially occurring in reverse.  For example, drop a vase.  The vase shatters.  The shattering of the vase, the final resting position of the fragments, depend upon how the vase was held when it dropped.  If we were to reverse time, then the vase would reconstruct itself.

Going backwards assumes that from time t+1 we can reach time t without any pre-disposed information.  The broken vase, again, could only reconstruct itself if we were also present in the mix of dropping it, and if the displaced air had something to take its place...  The whole complex system allows this.

Determinism: Since we could, if we were deities, extrapolate what would happen in the future and learn what happened in the past -- there must be only one way forward and one way back in time (according to our world view).

More convincingly, determinism allows us to make choices and be able to accurately deduce the consequences.  It allows us to make sense of this world.

Order: We like things that are organized.  Nature is very well organized.  We seem to self-organize for gain as we see fit.  We constantly try to further organize this world, be it by a system of roads (the visible) or by dividing spaces...

Conclusion: Briefly, I think that we are missing the forest for the trees in terms of our purpose (or the meaning of life, if you prefer).  If we reproduce, we are just another cog in the wheel that keeps on pushing evolution forward.  But given the overly predictable nature of our world, something is amiss.

Consider the solution to a dozen mathematical equations.  They can be solved by putting them into matrices and solving.  There are even means of solving the equations indirectly, obtaining an answer through multiple iterations.

I think we're in a system that is searching for a final, stable phase.  That the ultimate meaning of life is the final stable solution of everything.  Once chaos ends.

And by the time we figure it out, it will be too late.  We will have already served our purpose.

Friday, September 24, 2010

Rushing Medium AI in Starcraft II

So here I am, having fun, wondering how to quickly end a match versus the AI.  Against a medium AI, to be precise.  Well, it doesn't require much effort:

Terran:
Build around 3 SCVs per mineral field and 3 per Vespene Geyser.  While that is happening, build 3 barracks (each upgraded to build 2 marines at once) and sources of food (upgraded through the upgraded command center).  Now, build marines.  That means 6 marines produced simultaneously once everything is set up.  Send them into the opponent's base.  They won't know what hit them.

Zerg:
Classic Zerg rush.  Focus drones on harvesting minerals only (about 3 per mineral field), and build as many Zerglings as possible, using the Hive Queen to augment the number of larvae.  Don't forget to build Overlords.  After the first attack, or when you have a good 20+ Zerglings, send them off into the enemy base.  Game over, you win!

Protoss:
Rush with Zealots.  Build 3 Zealot-producing buildings.  Focus all effort on minerals.  Build an expansion if you can!  Send about 20 Zealots into the enemy base -- and watch them die off rather quickly...

I think I need to play on hard mode now...

Sunday, September 19, 2010

Starcraft II: First Impression of All Races

I've spent some quality time with Starcraft II today, and have a few notes on the individual races when playing against medium AIs.

Terran
The easiest to use -- keep upgrading weapons and units.  Make sure there is a constant cash flow and all will go well.  Build bunkers at choke points, a few towers to detect invisible units, missile turrets, and then it's just a matter of finding out how to destroy your enemy.  I like siege tanks.

Zerg
The Zerg have the early advantage.  Get to Hydralisks as fast as possible and move on to Infestors when possible.  The Infestors can remain hidden underneath the ground all the time and spit out Infested Marines.  It's an easy, disposable army.  Focus on taking out any units that can detect burrowed units.  Be sure to dedicate one Queen to your Hive to maximize the number of Larvae through which units are born.

In the second half of the game, use the burrowing capability to pick out units from the end of the line.  Normally siege tanks sit at the back so as to not get damaged.  Use Zerglings on them.

I enjoy how the Zerg force me to think up bizarre attack strategies -- that is, rather than sending powerful units up front, units creep up on the enemy and destroy them in surprise attacks.

Protoss
The most powerful; however, micromanagement is necessary.  Getting Carriers early can mean an early demise for opponents that have not prepared themselves for an aerial attack.

Sending a mass of Zealots to attack some location on their own is usually a safe bet.  I found myself constantly watching battles and making sure units had shields, pulling weaker units out, etc.

Overall
Despite what I say here, the best way is to flank the AI.  That is, once the AI tries to attack your base, send a heavy army into theirs.  Raze a few buildings, and you're off to success!

The only problem I had was that the "VS. AI" button on the single-player screen does not affect achievements, but playing versus the AI on the multiplayer screen gives me my achievements...  That's an interesting bug...

Also, Battle.net randomly decides to log me out and prevent me from getting an achievement once a level completes...  Quite annoying, especially when I've already spent a few hours...  Fortunately, the game is fun on its own.


Sunday, September 12, 2010

Starcraft II: A Zerg Strategy (medium, Zerg vs. Terran)

Today I started playing as Zerg against Terran on medium difficulty.  And I've found a very interesting strategy which the AI doesn't react well to.

First, send your initial Drones to the mineral field, and create Drones until you have 10, sending those to the mineral field as well.

Now, build a second Overlord (send the Overlords somewhere "safe", either the corner of the map or over your hatchery).

Good - build 2 more drones, and send them mining.  You want a continuous stream of income!

Make your next Drones rally somewhere safe, and use them to build a Spawning Pool and an Extractor.  The Spawning Pool allows you to get Zerglings; the Extractor you need for Vespene Gas.

Get 3 Drones working on each of your geysers.  If you have 2 geysers, build 6 Drones and another Extractor.

By now, the Spawning Pool must be built; build a Queen and upgrade your Hatchery to a Hive.  Your income ought to be stable enough by now to forget about resources for a bit (your units should be 13 Drones on minerals, 6 Drones maximum on Vespene gas, and one Queen).

Here's the bad news -- all this time spent building up resources has left us a bit vulnerable.  Quickly, build some Zerglings (you may want to build them earlier).  2 suffice!  These two will venture out and explore the map.

While you take in the bad news, build a Lair and an Evolution Chamber.  When building units, build 2 Roaches and 1 Drone, or 1 Roach, 1 Drone, and 1 Overlord.  The Zerglings are useless against the Terran; we want Roaches for a bit of defence, and we want the Zerglings to reveal the Terran base.

As you build units, make your Queen build Creep Tumors all over the place, preferably towards a choke point.  At the choke point, build Spine Crawlers.  Expect to lose your Spine Crawlers.  Also, research Burrowing at the Hive.

Quick Recap: By now you should have a Hive and have discovered the Terran base or the location of their army (avoid it if you can).  You have a few (3 maybe?) Roaches, plenty of defence, and your Burrow upgrade is being researched.  Ideally, you know where the Terran base is thanks to your Zerglings.  We know that the Terran are overpowering us.

Quickly, get the Roaches out of your base to a spot outside of the Terran field of interest (that is, where they most likely won't be...) -- ideally near the entrance of their base.  Burrow the Roaches.

At your base, build (as soon as possible) a Hydralisk Den and start building Hydralisks.

Expect to be attacked.  When the Terrans attack - they will have sent their entire army.  Their base is now empty.  You have units waiting there...


Attack the base with your units.  If you can do enough damage, the Terrans will retreat and your base will be in good shape (minimal damage, except for the loss of the Queen and some Spine Crawlers).  If you manage to take out some SCVs, Supply Depots, etc. in the Terran base, they are done for.  While they head back to their base you have time to inflict quite a bit of damage.

You can sacrifice your Roaches.  By now, you should have plenty of Hydralisks.  With the enemy back at their base, bring the Hydralisks to their entrance, and Burrow.  Rebuild your minimal defences and try to get an expansion going.  With an expansion, the number of Hydralisks you can build will easily double.

Repeat the tactic of squatting in front of the Terran base and attacking when your cheap defences get attacked.  Be sure to have a few Hydralisks and Spine Crawlers to keep things from wanting to stay and push past the cheap defences.  Doing a lot of damage to their base also has the same effect!

And now, it's rinse-wash-repeat.

To recap: the overall strategy is to Burrow near the entrance of the enemy base, and to go in and destroy while their offence (which also acts as their defence at medium Terran difficulty) is busy destroying a cheap entrance to your base that can easily be rebuilt.  Without stopping, keep on sending reinforcements to your crew waiting to ambush the enemy.  Once the enemy leaves their base, keep your new units with you as defence.  Once they reach your base and start attacking, inflict as much damage as possible.


Happily, this works very well.  However, don't expect to win the first time.  The AI does make the mistake of leaving its base undefended -- but that doesn't mean it is completely stupid.  The building must be done as fast as possible.  Go too slow, and the first wave from the Terrans will destroy your base.

Is the "New Digg" that bad?

<rant>
Each time a new version of iTunes comes out, people line up to complain about the UI.  There's something about change that people fear and don't like.

Now that I've spent a bit more time with the "New Digg" - let me say it isn't that bad.  You can still submit links.  That erases most of the arguments that I had against it.  The crux of Digg is finding new stuff.

For the auto-submission?  I think it's a good thing.  Normally - if I wanted to submit something I wrote, I'd log into Digg, submit it, and see it remain ranked with 1 Digg (mine).  Now, what I do gets auto-submitted, and automatically gets a single Digg (mine, again).

For the enhanced algorithm?  It uses multiple sources, but weights the number of Diggs as having more of an effect.  A good idea, actually.  Ok, sit back and let's see how this works out: you want to find new stuff.  Stuff that a lot of people like.  Do you want a closed community of Diggers, or a bigger community of people who may in the future be lured to Digg?  At the end of the day, I don't think this will make much of a difference.

For those quoting http://www.alexa.com/siteinfo/digg.com, let me say that is a considerable dip (in the month of August 2010, pageviews and time on site nearly halved!).  Is this the end of the world?  Let's dig deeper.  Page views probably went down since articles don't seem to change as often.  Can't complain there.  That would also explain why the time on the site went down.  And it would explain why the reach has remained somewhat constant.  I call the evidence that Digg shed most of its users inconclusive.

Lastly, for those complaining about not seeing how many Diggs a comment has: just click your profile icon.

Yes, Digg did shed some users -- there's the emotional bunch that has trouble whenever the iTunes UI changes...

Here's the only downside: I lost my entire history.  Either it's in the process of being ported, or it's lost forever.  From reading the comments on Digg, I see things are probably being tweaked.

Like any new machine, Digg will take some time.  Play around with it, and you might even find that going forward it's going to be much better.  It looks more like a problem with deployment than a problem with the system.  If they had slowly deployed it to certain users rather than to everyone at once, things might have gone more smoothly.  A bit more communication, and maybe Reddit wouldn't have flooded the front page (if deployment of the new version took considerable time and people were complaining about real issues at the time, notifying the users might have been better).

But all in all, it's not that bad.  And to say this, I forced myself to spend a bit of time using it (unlike the previous version of this post, which was completely inaccurate -- a bit of research goes a long way -- although let's say I still did no research, just for the fun of it!)...

Edit1: Mashable has a nicely researched piece at http://mashable.com/2010/09/15/what-digg-must-do-to-survive/
</rant>

Case for Objective-C++

Forget Objective-C. I mean it. For iOS development, the ideal language is Objective-C++.

I expect plenty of pitchfork-bearing purists -- those who align themselves with the "goto is bad" and "overloaded operators are bad" crowd -- to not like what I'm about to suggest. But, as with goto and overloaded operators, like certain medicines, too much can be fatal. I believe programmers can make reasonable decisions about the use of these constructs.

With Objective-C 2.0 came properties. These allowed for easy integration with key-value coding; but there's one problem with them -- all properties are public. All methods are public as well. I believe this is bad programming style.

Here's the key to my solution -- simple and elegant:
template<class T>
class Auto
{
private:
    T m_o;
public:
    //Assignment: retain the new object, release the old one.
    inline T operator=(T in_o)
    {
        if (in_o) [in_o retain];
        if (m_o) [m_o release];
        m_o = in_o;
        return m_o;
    }
    //Take ownership without an extra retain (for objects that come
    //back with a +1 retain count).
    inline T noRetain(T in_o)
    {
        if (m_o) [m_o release];
        m_o = in_o;
        return m_o;
    }
    //Access the wrapped object.
    inline T operator()() const
    {
        return m_o;
    }
    inline Auto(T in_o = NULL)
    : m_o(NULL)
    { (*this) = in_o; }
    inline Auto(const Auto<T> &in_cpy)
    : m_o(NULL)
    {
        (*this) = in_cpy();
    }
    inline ~Auto()
    {
        if (m_o) [m_o release];
    }
};

Consider how this code nicely takes an Objective-C object and wraps it in a smart pointer.  Ensuring that objects get retained and released becomes much more trivial -- and a lot less typing.  No need to define a property, synthesize it, and double-check that it gets released in dealloc.  Why?  Objective-C++ automagically calls the destructor for C++ objects.

Using this is quite simple: Auto<NSString*> m_something; creates your string.  Assign an autoreleased string to it (or use the noRetain method) and off you go, forgetting about this object.  Assign a new string?  The old one gets automatically released.  Need to call a method?  [m_something() UTF8String]; is a good example.
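To make that concrete (m_something is just an illustrative member name):

Auto<NSString*> m_something;

m_something = [NSString stringWithFormat:@"%d items", 42]; //retained by operator=
NSLog(@"%s", [m_something() UTF8String]);                  //access via operator()
//When m_something goes out of scope, its destructor releases the string.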

The good thing?  Minimal overhead -- the compiler should inline out most, if not all, of the Auto object.  What you're left with are the retains and releases that you'd have normally put in.

A note of caution: only use this object in places where you'd normally ensure an object gets a retain or release.  Do not enthusiastically put it everywhere.  Recall: good things in great quantities might not be that good.  They can even be fatal!

On the up-side, if you want properties, you'll need to actually write the methods behind them to translate from the C++ wrapper.  This is good, you'll see, since it makes it harder to make objects publicly accessible -- always a good design decision.

Before I wrap up this post: for private methods, I suggest using C static functions in your implementation files.  Can't get more private than that!

For protected functions -- useful with polymorphism -- I'd avoid them.  Once a method is declared, it can easily be reflected upon and used, no matter how hidden/private it is (unfortunately).  I believe that this poses interesting software design constraints, though.

Sunday, August 8, 2010

Parallel Processing Predictions: Part II

Last we left off, I stuck to my theory that applications would remain sequential - and that parallelization would be done implicitly through interaction with the OS. I gave a hint of my thinking by specifying that there is a delay between the creation of an object and its actual use.

Previous Work: First, Amdahl's Law [Wikipedia has a good write-up on it]. You can't go faster than the slowest serial portion of your code. Dr. Dobb's has an article on beating Amdahl's Law [Dr. Dobb's article]. I disagree - speeding up the execution on a single processor still means you are bound by the serial portions of your code.

For your reference - I consider SIMDizing [Intel has a good intro to get the idea] to be parallelizing code, as well as reordering instructions to maximize the use of the underlying execution units [Apple's write-up is a bit dated, but I like it].

So - you must be laughing; I'm about to explain a miraculous way to make all of today's code parallel. Well, most of today's code - as long as the OS incorporates these changes.

I will give credit to Intel's Threading Building Blocks and its distant relative Cilk. I shouldn't forget Grand Central Dispatch - if only for its ability to do task-scheduling with a view of the entire system load.

Aren't these great solutions? Oh yes. Brilliant. Tackling issues of data locality and efficiently doing work across multiple cores. Essential given the future is parallel.

Then what's the problem? If they optimally express the intentions of the programmer, none whatsoever. For me, just keeping track of what is running, and what can be run, is not a "good thing".

Inspiration: When I think of implicit parallel programming, I think of OpenGL - even though drawing requests are issued in sequence. For example, you can draw plenty of geometry - but that geometry is just cached in a buffer and drawn when the GPU has time. So while the code is doing whatever logic it needs to do, OpenGL is working in the background getting stuff done. And it's not only the video-card that's running, but part of the API as well. States need not take effect until they are actually needed.

The other thing I'd like to point out is the intelligence of compilers - more exactly, how much optimization they can do. I'm not suggesting the Octopiler or something similar.

I'm suggesting devising APIs that are designed from the ground up to do work in parallel, but also increasing the integration of the compiler within the process. (I'm sure someone else has thought of this; there are brilliant people in CS departments around the world. I'm thinking aloud since I enjoy it. It also helps me coherently determine how my code should evolve.)

To avoid the simplicity that is a graphics API, let's consider another object. Let's say an object that does set manipulations. We have intersections, unions, enumerations, and all the wonderful dangers related to parallelizing these operations. A quick Google search will lead to results of people writing code using atomic operations, and making mistakes using atomic operations. So, in this part I'll document how this API would be presented to the end-user and how it could be implemented.

First, I can feel the temptation to say that all objects are immutable - that all mutations happen over copies. However, that is - how to say this - cheating. Our goal is to change existing code as little as possible.

The API: The end-user of the API can do whatever they're used to doing with scalar code. It's the API itself that must do the miracles.

The Inner Workings: Finally, the moment of truth - how would it work? Let's assume that we are building on Grand Central Dispatch, with cache-coherency methods similar to those of Threading Building Blocks (hybridized, these would make a great team for the next few years).

The general strategy is that the API has a series of sequential tasks. These tasks can run in parallel (even later), until the user actually makes a request for data. This leaves us with a few cases - the major ones are Immutable and Mutable.

Ok, Immutable objects are the easiest. We will add a state to the objects: "Pending". An Immutable object is in a "Pending" state if it hasn't completed initialization. We will also add a sequential list of tasks that can run asynchronously.

The entire API's logic is run on this asynchronous task list. Each task is sequentially numbered by the sequential portion of the API.

The easiest methods are those that return a new object. That might be a list, or something else. As long as it's all part of the same API, it doesn't matter. This new object, Immutable or not, is not really an object, but more like an IOU. It holds the ID of the task and is put into a "pending" state. The same is true for existing objects stored within the data structure.

The next easiest methods are functions that return basic data types. These could be defined to return a special wrapping object, so they'd behave like their object counterparts. I'd suggest making the compiler hide these gory details, and use decent heuristics to determine if the overhead of creating an object is worth it.

But back to those returning basic data types. We'd get an ID of a task; but would stall the execution of the sequential code until the data was received.

Wouldn't we always be stalling? Here's the beauty of the solution - we know what work we must do sequentially. We can also do dependency checks. If one operation doesn't depend on another, it can be scheduled to run in parallel. Even individual tasks could be made to run in parallel (like the intersection of two sets).
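Here's a toy sketch of the IOU idea using only the standard library - the real design assumes a GCD-style task queue, which I'm approximating with std::async, and PendingSet with its methods are illustrative names:

#include <cstddef>
#include <future>
#include <set>

template <class T>
class PendingSet
{
  std::shared_future<std::set<T>> m_pending; //the "IOU" for the data
public:
  explicit PendingSet(std::shared_future<std::set<T>> in_f)
  : m_pending(in_f) {}

  //Returns another set: schedule the work and hand back a new IOU
  //immediately - nothing stalls here.
  PendingSet intersect(PendingSet in_other) const
  {
    std::shared_future<std::set<T>> a = m_pending;
    std::shared_future<std::set<T>> b = in_other.m_pending;
    return PendingSet(std::async(std::launch::async,
      [a, b]() -> std::set<T>
      {
        std::set<T> out;
        for (const T &v : a.get())  //get() waits on the dependency
          if (b.get().count(v))
            out.insert(v);
        return out;
      }).share());
  }

  //Returns a basic data type: only here must the sequential code
  //stall until the value is actually available.
  std::size_t size() const { return m_pending.get().size(); }
};

Two intersections that don't depend on each other's results simply run concurrently; only a call like size() forces a wait.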

Next are the Mutable objects. Surprisingly, these aren't that different. Really.

What I've just done is outline how to parallelize a sequential application by properly engineering the backend libraries. This parallelization is completely transparent to the end-user of the API.

This basic solution might be enough to divorce an application's serial logic from the immensely parallel nature of the underlying computational hardware.

Sunday, August 1, 2010

Parallel Processing Predictions: Part I

I've been thinking about parallel processing as of late. I've coded a task scheduler with a few swappable algorithms in the backend to test performance; seen my programs degrade in performance as the number of cores increases; and literally learned everything the "oops, I made a mistake" way.

So, standing upon my pile of errors, I'm going to attempt to predict what will happen (some of it very long term). Some of it we already know - hardware will make a massive shift. Others, well - let's see if I'm right about any of them in the next decade.

No Change For Most: 99% of applications will remain sequential in nature and never directly deal with threads. This sounds crazy, I know, given the massive push towards multi-core - but it's an unshakable conclusion in my mind.

Rather, libraries will be parallel and asynchronous - making "DIY" algorithms an even sillier affair. As a simple example, consider comparing two strings. There is a delay between when the request to compare the strings is sent and when the value is actually used. If we use something like Grand Central, where the threads are already spawned and just need to be used, then we're talking low-overhead implicit parallelization.

Essentially, libraries will present a sequential interface. User-land code will not need to know about parallelism - but will take advantage of the hardware when the programmer leverages the underlying frameworks.

Right about now, some people probably want to remind me that the GUI runs in the main thread whereas heavy processing is normally put into a secondary thread, with mutexes to coordinate the two. I'll recall olde code that actually split up the processing so that the GUI event-pump could run. Even olde code that would, at explicit times, yield execution to the GUI. Just yielding when data is available to update the GUI is much less messy than having locks in obscure places. Normally, for such things, I have one lock that synchronizes events from the GUI (on its separate thread) with my work thread. The more sequential my code, the easier it is to work with.

Procedural Over Functional: Again, the Haskell fans will most likely disagree. Actually, I'd rather code parallel code in Haskell at times... But let's not get distracted - I believe procedural code will maintain its reign. The reason is simplicity. Everyone understands a TO-DO list, a sequence of instructions, manuals, etc. No manual I know asks you to trace back from the desired result to the start of the solution (sometimes I feel like I'm rigging code to have side-effects just to get it to run in functional languages).

Even if languages like Haskell are "better"; the rich and parallel underlying libraries of the system will make the entire thing moot.

Software Diets: Back-end system libraries will overthrow software complexity. That is, the OS will provide so much functionality that an application will simply have to connect the pieces. Some specialization might be needed, but very little.

This availability of features in the OS will change the software landscape. Software that does not properly integrate into the OS (which will have a global strategy for multiprocessing, with libraries optimized for that strategy) will slow down.

Think of it this way: the OS and its libraries will shift together. And as hardware changes, so will the OS-provided libraries. Software using its own libraries will need to adapt; software using the OS-provided libraries won't.

This raises problems for multi-platform toolkits as I figure that certain functionality will be radically different on different OSes, even though they accomplish a similar thing.

Hardware Rollercoaster: See the disconnect between the user-land software and the hardware? Well, that means the underlying hardware can change more drastically. If the OS is the only one that needs to know about the specifics of the hardware and the user-land software just needs to focus on the algorithm; then people shouldn't even notice the insanity that is going on underneath the hood.

This implies that something akin to the LLVM project would be used to compile to a machine-independent level, with compilation completed on the final hardware.

More Open-Source: Not the year of Linux, but the year of open-source libraries. The added complexity of the underlying libraries will mean that they are more costly to develop. I'm not advocating using the spare time of developers to advance a corporation's heavy work; rather, problems will be shared among OS vendors. Open-source is just a way to save money by reducing the amount of redundant work among companies.

If I had to wager, I'd say a BSD-style license - so each vendor can advertise having the upper hand when marketing to consumers.

In the end... we aren't headed towards great doom when it comes to multi-core. Yes, people are panicking now as they realize all the caveats; however, we've known for quite a while that 10% of the code is responsible for 90% of the execution time. That 10% of the code, in most cases, should be part of the OS and its supporting libraries.

Final Ramblings: I referred to Apple's technology since I'm more familiar with it. I'm sure Microsoft is doing something similar - and that the .NET runtime might be better suited to the type of parallelism that I'm describing.

Now that I've predicted that nothing much will change for your average programmer, I'm going to do a part 2 and detail exactly how I see the underlying libraries and hardware shifting. In part 3, I'll explore how the development environment could change.

These predictions aren't final. As I learn more and read more of the results from the scientific community, my predictions may be refined over time. I won't delete these pages; I'll admit my mistakes and document how I erred.

One last thing. As a reader, you must have noticed the lack of references in this document. I don't trust texts without references, and neither should you. This part essentially puts forward the theory that user applications will, in the worst case, just need a recompile to keep on running using all the available hardware.

All this despite the move to multicore.

Monday, July 26, 2010

The wonders of NSString

This post is much less of a rant and more of a regurgitation of how to use NSString - a way to solidify it in my memory and give me somewhere to look it up later (I think it'll get buried and I'll forget about this post, making it a form of future e-waste - where e-waste is knowledge that accumulates but serves no purpose in the present except to take up space).

So... the lowly NSString.  Where shall I begin?  Maybe with the simple idea that whatever text processing I've needed to do, NSString (and its supporting objects) always seems to provide a means to do it (sometimes thanks to methods added after-the-fact to the NSString object).  And also that getting a C string is almost never needed unless interacting with a C library (the standard C library shouldn't be needed at all).
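
And for that rare case where a C API really does need one (someCLibraryCall and myString are made-up names):

const char *cstr = [myString UTF8String]; //owned by the framework - don't free it
someCLibraryCall(cstr);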

NSString is very similar (for historical reasons) to Java's String object.  Neither can be changed after being created; changes instead require the creation of a new object (in such cases, NSAutoreleasePool is your friend if you find yourself doing innumerable string manipulations):
NSAutoreleasePool *p
    = [[NSAutoreleasePool alloc] init];
//Do plenty of string operations using
//[NSString string*] methods.
[p release];

That's the beginning. OK, time for some fun! Let's say there are plenty of string constants that need to be internationalized. The simplest solution is to write the following for each string:
NSString *s = NSLocalizedString(@"Hello", @"Greeting String");
If you've set up your translation files correctly (generate them using genstrings), then "Hello" is looked up in the current language, and "Greeting String" is a hint for the translator.

Can we do more with NSString? Of course! Splitting, loading a string from a file, and all sorts of other operations can be done.
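
For instance, a couple of one-liners (the file path is made up, and error handling is elided):

NSError *err = nil;
NSString *text = [NSString stringWithContentsOfFile:@"/tmp/notes.txt"
                                           encoding:NSUTF8StringEncoding
                                              error:&err]; //loading from a file
NSArray *lines = [text componentsSeparatedByString:@"\n"]; //splitting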

I'll just note a few objects that can be useful rather than give a complete tutorial on them (the Apple docs are very good).

NSCharacterSet defines a set of Unicode characters.  This object provides a quick way to get all whitespace, all punctuation, and even all newline characters.  Very useful when parsing files or breaking up strings.
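
A taste of it, just to show how little code is involved:

NSCharacterSet *ws = [NSCharacterSet whitespaceAndNewlineCharacterSet];
NSString *trimmed = [@"  hello  " stringByTrimmingCharactersInSet:ws]; //@"hello"
NSArray *tokens = [@"a, b; c" componentsSeparatedByCharactersInSet:
                      [NSCharacterSet punctuationCharacterSet]]; //break on , and ;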

NSScanner provides a nice way to scan any arbitrary string.  It is much more flexible than the standard C/C++ file operations in that it returns both the data (string, float, etc. - even delimited by an NSCharacterSet) and a boolean specifying whether the data was obtained.  This makes parsing more complex files... easier.  (I don't doubt that it can be done in C/C++ - just that assuming the structure might vary rather than be fixed is a nice default.  E.g. an optional floating-point number would be read as 0 in C++, if memory serves.)
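
A small sketch of that flexibility (the input string is my own invention):

NSScanner *scanner = [NSScanner scannerWithString:@"width = 3.14"];
NSString *name = nil;
float value = 0.0f;
BOOL gotName   = [scanner scanUpToString:@"=" intoString:&name]; //@"width "
BOOL gotEquals = [scanner scanString:@"=" intoString:NULL];
BOOL gotValue  = [scanner scanFloat:&value]; //NO if the number is absent - no silent 0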

NSXMLParser is nice - but I'm not really a fan of XML.  Let's just say that there are neater ways to represent data than XML.  XML is nice for being standard - but the tags are just too much for me.  Still, it's there and it works great!

NSString can also do everything the standard C string library does - I just don't have to worry about memory buffer sizes.  I'm not familiar enough with the C++ string class to comment on it.

And for speed, CFString allows quick access to the underlying data:
//Direct cast -
//toll-free bridging as Apple calls it.
CFStringRef r = (CFStringRef)@"NSString";
CFStringInlineBuffer b; //inline buffer used to iterate over the characters
CFStringInitInlineBuffer(r, &b, CFRangeMake(0, CFStringGetLength(r)));
UniChar c = CFStringGetCharacterFromInlineBuffer(&b, 0); //first character
So yes - this has devolved into a rant. Its conclusion: look at what NSString does. It does a lot more than you think, especially thanks to categories, which allow any programmer to give NSString more functionality (that's how it can render text to views - and how external libraries can use it to render strings to textures, etc.).
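
As a quick sketch of the mechanism - a made-up category that gives every NSString a new method:

@interface NSString (Shouting)
- (NSString *)shouted;
@end

@implementation NSString (Shouting)
- (NSString *)shouted {
    //Every NSString in the program now responds to -shouted.
    return [[self uppercaseString] stringByAppendingString:@"!"];
}
@end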

Unfortunately, this is true of most modern APIs - and of all their objects. The objects can do so much that it's no longer a question of knowing how to do something, but of knowing the magic keyword that will return the proper method in a search result.

OK, a mini-rant,

<rant>
Programmers now work at such a high level. And that high level will allow us to keep on coding as normal while everything transparently becomes parallel (something I've been thinking about a lot recently - the challenge may appear great, but in retrospect abstraction makes it easier. As a hint - we parallelize like the CPU does with instructions. Branches are our undoing - yes, I'm Captain Obvious. I think Qt Concurrent is the closest to what I'm thinking of - but there is a way to provide a sequential API with parallel underpinnings. It already exists in the form of OpenGL. The real trick - which I'm almost done tinkering with - is how to get modules that don't know about each other communicating in a relevant way in parallel.)
</rant>

Saturday, July 17, 2010

iPhone 4 Ranting

<rant>

Note: as usual - no research, just a personal opinion.  I like the opinions page in newspapers, by the way!  If there's a massive flaw in what's said below, just leave a comment.

So today, we have the results.  Owners of the first batch of the devices (as per the Apple standard, there's a problem with them) can easily get a full refund or a free case.  Which, I believe, is a good result.

For those complaining about the delivery: if Apple were to admit fault in the product, they'd probably open themselves up to more trouble than it's worth.  The delivery ensures people get a working device or a refund (good for the customer) while minimizing the impact of the negative consequences of the device malfunctioning (good for Apple).  It's win-win if it all plays through as expected (unless some die-hard fanatic desperately wants the device to work without a case and survive a death-grip - a death-grip which, by the way, hasn't been replicated by everyone).

So... now, what's more interesting than the actual response from Apple?  The comments on various forums.  I'll classify them into the following categories:
- Emotional: I don't know how I could get emotional over an Apple product.  Some are infatuated, some are angry.  It's a device - if I want to play with my emotions, give me a good story!
- Quick Point: Some use it as a bargaining chip to advocate the Android platform / open source / etc.  Of course, that platform hasn't been without its own issues - and certain vendors do lock it down.  Shouldn't people just walk into a store, try out the machines, and see which one best matches how they expect to use it?  And if it doesn't work - return it?  At some wireless provider's store, I'd rather muck with the menus and devices than compare feature-sets.  Yes, some have a lot of features; but when accidentally pressing other buttons is possible, it gets annoying.  Feature-sets and statistics can be equally rigged, I believe.
- Realists: These actually work out the numbers behind the statistics thrown out at the event and question their validity.  The result is that things probably aren't as rosy as the statistics intuitively suggest.

There might be more; but that's enough for now.

I'm waiting until the dust settles before even considering that device.  Actually, I'm more interested in an iPad.

On a side-note: people completely driven by emotion scare me.  It's no laughing matter how much of a role people's emotions play in who gets elected here...

</rant>

Monday, June 21, 2010

Rant on The Concept of Task Scheduler

<rant>
A few days ago, while browsing Wikipedia, a small detail caught my eye.  The .NET framework is finally getting a means to schedule tasks.  Task schedulers are nothing new (they have a fairly long history); the most prominent at the moment are Intel's Threading Building Blocks and Apple's Grand Central Dispatch + Operation Queues.

In general, the idea is very simple.

Threads + critical sections + error-prone humans writing code = race conditions + deadlocks + other "joys".

Ok.  That's not good!

Race conditions are never a good thing.  What we want is to get rid of them (or at least reduce their likelihood).  They, and their cousin the deadlock, arise from bad use of critical sections - for example, two threads mutually blocked on resources held by each other.

Eg.  Thread A needs fish and thread B needs bread.  Thread B has the fish and won't relinquish it until it has the bread.  Thread A has the bread and won't relinquish it until it has the fish.  Deadlock.

Eg.  Thread A should complete its tasks first, but thread B manages to get started ahead of time due to a programming mistake.  Race condition.
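
To make the first scenario concrete, here's the fish-and-bread deadlock as a minimal sketch - two NSLocks plus Grand Central to spawn the threads (names and setup are mine; run it enough times and it can hang forever):

NSLock *fish  = [[NSLock alloc] init];
NSLock *bread = [[NSLock alloc] init];
dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(q, ^{      //Thread A
    [bread lock];
    [fish lock];          //waits for B to give up the fish...
    [fish unlock]; [bread unlock];
});
dispatch_async(q, ^{      //Thread B
    [fish lock];
    [bread lock];         //...while B waits for A's bread. Deadlock.
    [bread unlock]; [fish unlock];
});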

For deadlocks, we just need to ensure that a task only starts when all of its resources are available, and that it doesn't keep hold of any resources once it ends.

For race conditions, we do much the same: make sure the order is properly specified.  Rather than specifying a list of things to do with convoluted gates keeping everything running in order, we tell the underlying system what the order is, as a tree of dependencies.
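
A minimal sketch of that dependency tree using Operation Queues (the block contents are placeholders):

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
NSBlockOperation *load = [NSBlockOperation blockOperationWithBlock:^{
    //fetch the data
}];
NSBlockOperation *process = [NSBlockOperation blockOperationWithBlock:^{
    //crunch the data
}];
[process addDependency:load]; //process may only start once load has finished
[queue addOperation:load];
[queue addOperation:process];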

Grand Central (from my quick reading) helps with the former and a bit with the latter (it uses semaphores).  Operation Queues and Intel's Threading Building Blocks focus on the latter (I'm more familiar with this model of operations).

I'd go into the details of the specific APIs, but those can be found all over the web - explained more coherently than I'd ever manage.  Besides, the APIs are moving targets; the concepts, not so much.
</rant>

Monday, June 14, 2010

Rating Software Goods

<rant>
Normally, when rating a piece of software, I think numbers.  How many features?  How fast?  Kitchen sink included?  It's starting to dawn on me that this kind of thinking is what leads to the abominations Visual Basic is so well known for - forms with countless features, quickly wired up - frais-du-jour!

So my thinking is starting to shift towards: how can we quantify the usability of a user interface?  Looking at the runaway success of the iPhone's UI, there must be a way to assign a metric.

In the physical world, we'd call it the quality of the product.  Something made of cheap parts is expected to die sooner than something made of quality parts designed to last.  It's common sense - in a sense.

How do we build a UI?  Maybe the construction will lead to the quality questions.  Hmm, Windows Forms, drag a few text boxes, connect it to an Access database - and voila - instant abomination!

Wait, a quality UI....  That requires thought.

Recently, my thinking is that a good UI is like a good book.  A good book does not need a manual to be read - it is, in and of itself, the manual.  Some books are technical in nature, demanding a certain background - just as certain software demands more knowledge of its users.

A good book does not confuse the reader by overloading them with information.  Neither does a good UI.

A good book will have information that is easily accessed.  So does a good UI.

A good book will have a consistent writing style.  So will a good UI.

A good book requires a lot of forethought and consideration as to how all the chapters will flow.  A good UI, I'd argue, requires just as much thought.  Blindly throwing text boxes onto a form is the equivalent of a rough draft of a book where ideas are present without any links or structure.

For example, a good book will usually try to appeal to a person's intuition about a given matter before burdening them with the technical aspects.  That way, once the technical aspects are presented, the reader just needs to learn terminology, as the concept already makes sense.  UIs generally miss the boat in this regard.  And that's a challenge to resolve...

Note:  I'm aware of click count as a metric for web pages.  But I'm inclined to say that UI design trumps click count.  And good UIs don't require that much effort from the user (the comment is circular on purpose).
</rant>

Monday, June 7, 2010

Rant on Rapid Prototyping of Software

<rant>

I tend to vote for throwaway prototypes: something whose initial purpose is to test an idea, and whose design doesn't really matter.  The prototype is not intended to become the final product.  My reasoning is that organizing code actually distracts from getting the goal done.

Why?  Because a prototype is supposed to answer some form of feasibility question.  If the prototype is designed to become the entire system, then something's wrong.  It could, in retrospect, become the final product - but this shouldn't be planned.

Why again?  Agility.  If you're testing the stability of an algorithm that simulates a physical system, do you need to run it in the final application?  Most likely not - you know the performance characteristics of the algorithm and any computational limits that are imposed.  The goal is to write, as quickly as possible, an application that displays the results and subjects the system to harsher conditions than are expected in the final application.

Expanding on this idea - if the code is expected to be used as-is, maybe the programmer might spend extra time organizing the code, making it legible, documenting it.  But really, a prototype just needs code and a person who understands it.  Organization is over-rated here - whatever gets the code written fastest is what matters most.  (Why use accessor functions when variables can be directly accessed?  Why comment if the algorithm will be quickly swapped out?  Why worry about the aesthetics of the visualization when the goal is to observe stable and correct behaviour?)

Again - let's consider a user interface.  A very touchy subject, as reading a user's mind and knowing what's best for them is actually harder than it seems.  A good user interface does not need a manual, as it is intuitive (that's a comment on the current state of UIs though - if training on a UI is needed, something's wrong, in my opinion).  I believe it to be more of a hit-or-miss situation - a UI will either be good or bad.  It's like throwing plenty of things at a wall and seeing what sticks.  Now - if we create a prototype designed to evolve into the final product, then there is a layer of organization which implies assumptions.  There is structure.  There is documentation.  Comments.  Plenty of work that has the real potential of being trashed rather than seeing the light of day.  A UI prototype should be done as fast as possible, just to ensure that users can test it.  It doesn't even need real data - just a realistic progression so that users can judge the ease of use.

What could change in a UI?  Buttons need to shift around.  Sub-windows?  Lists?  Custom controls?  Special effects?  Each of these could be added and removed at will.  A good UI should have several test versions.

Then - what happens when a prototype has discovered the path to follow?  We pillage what's salvageable and good, as quickly as possible.  We document.  We comment.  We organize.  And tackle the next challenge.

Isn't a small amount of spaghetti code worth it if concepts get validated or disproved quickly, as opposed to developing a large framework first?

And by quickly, I mean a prototype should not take more than 4 hours.  It should be focused.  If the prototype includes building a functional version of everything, then maybe you aren't building a prototype...

</rant>