I am a Senior (Lead/Gold?) Full Stack Developer (or, if you really must prefer, Engineer).
I have too much experience for it to look good (on a CV).
I have skills across multiple disciplines (to the point of it being confusing for those trying to categorise me).
I enjoy debugging, investigating, documenting, testing, discussing and writing code (and infrastructure).
I am at my happiest when working towards a pointful emotive project surrounded by passionate and friendly people.
I can be prone to waffling, but I make up for it by trying to help everyone I work with.
I prefer any stack that involves any flavour of JavaScript, but I can handle pretty much anything needed.
If you don't have much time, check my videos below which should avoid a lot of words.
And yes, I can lead, plan meetings and mentor; I just prefer not to lose my coding roots.
And no... my role doesn't require me to be in the office (but coordinated, planned, collaboration days are brilliant!).
A (recent) note about recent projects
I am not exactly sure why, but every time I end up searching for a job,
the fates —or something that some might call "the haphazard unfolding of reality"—
seem to fall in such a way that I am unable to show off any of my recent professional work.
At least not in a live, usable context —which is what most people expect.
This is either due to projects being built for private usage, or due to specific
contractual arrangements, or the project having been changed over time away
from what it was when I started, years back.
For example, much of the hard work that I put in for my current employer is
either behind auth accounts or has been recently turned off due to a change
in stack (brought in by the company that bought us).
This is why I rely on showing off my personal work, as it is more reliable.
It does —despite what many might think— require a lot more of my skills
and drive than any professional work ever has, because I have to fit it into
ever-shrinking pockets of spare time.
However, I do realise prospective employers may not recognise it as the type
of work they expect.
This leads me to... Analysis
So, I have also been running this profile site for close to a month now, which
means I've been able to check over the analytics to see what is working and
what isn't.
I've listened, and taken everything on board that you've been telling me.
And for these two reasons, I've created the explanation video below.
It covers just some of my recent personal work, but also tries to explain
it a bit more, so that those watching can see how my efforts still
align with the kinds of skills and abilities that are expected of a
Senior Full Stack Developer.
Building the Codelamp.co.uk Site
Again?
Every time I have found that I've needed to level-up my career, I have rebuilt my profile site. I have done this
for multiple reasons.
It helps me catch up with anything I haven't been able to focus on in my day to day work.
It gives me a chance to show off what I can do; rarely does a position test all my skills.
I don't believe a single sheet of A4 (CV) is suitable for something as important as anyone's career.
As such, I have been working diligently towards this demo site for a while now. We reached version 5.2 because
versions 5.0 and 5.1 were not good enough, and fell by the wayside for various reasons.
And, because that kind of attention to detail and effort tends to be overlooked, I feel that I need to highlight
it and explain. The problem with that approach is that it tends to lead to a lot of —eye-glazing— verbosity.
So, in the spirit of being a bit more succinct and flashy, I put together the following
demo reel:
Video & Code Hybrid
Now, with all the crinkly bits around Norway done and out of the way.
Here comes the inevitable and quite long-winded explanation. Apologies, but
you should have seen this coming. I did warn you.
I have A LOT of experience with optimising things for browsers.
I have been building UIs in JS, HTML, CSS (and Flash) for as long
as it was a thing to be doing.
And back when it was possible to keep the scope of what browsers
could do in your head, I gained a knowledge
comparable to QuirksMode.org —in part thanks to QuirksMode, but
also in part from constantly testing and trying things myself.
These days, the scope of what is possible online is too complex
to have a general overarching knowledge. You have to specialise.
And my specialty is creating browser-based things that push the
edges of what is possible without causing too much
jank.
I need to know how to optimise for my indie games; I've needed to know
for each one of my profile sites; and I've needed to know to safely build
the kinds of sites I've been employed to create.
And so, in order to achieve the kind of experience I wanted
(one that was bigger and more intensive than my previous sites), I
knew I would need to mix video and code together, to avoid the kind of
performance problems I had run into in the past, when the
main thread is doing too much.
Thankfully, I have experience in video production due to being lucky enough
to go to a school that had it as an optional subject. It had a big impact on me,
and it may well have been where my career went. That is, if the university I
had planned to go to hadn't cancelled the courses I wanted to do, out of the blue, at the last minute.
And if, simultaneously, friends from college hadn't asked me to help them
start up a Web Development company called Fubra.
As they say, the rest is history, but a history that definitely made it
easier putting together the clips, tracks and edits I needed to achieve
the end result you see above. I hope you liked it. I hope —in a world of
too many AI-broadcast job applications for job postings managed and created
by AI— that it stands out. And if it doesn't, well... I learned a lot again from
making it.
This is an example of one of the composite layers, using chromakey
to give the transparency. It was all rendered using handcrafted
PIXI code, enhanced with help from Claude, and coded in Windsurf.
Building the layers
The video is formed out of many layers, all of which have been
created by js code, and some have been captured to video. The first layers
were constructed using Pixi.js. These were then composited using
Filmora, and I could then add some effects (both visual and sound).
The secondary layers and titles are handled live in-browser, using js to orchestrate
the timing with the video. Again using Pixi for some of the more intensive
elements, and HTML/CSS for things I could get away with.
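As a rough illustration of that orchestration (cue times and handler names are hypothetical), the overlays can be driven from the video element's own clock:

// A rough sketch of syncing live overlays to the video's clock; the
// cue times and the showTitle/hideTitle handlers are hypothetical.
const video = document.querySelector('video');
const cues = [
  { at: 2.5, run: () => showTitle('Intro') },
  { at: 10.0, run: () => hideTitle() },
];
let next = 0;
video.addEventListener('play', function tick() {
  // driving from currentTime keeps overlays in sync even if playback stalls
  while (next < cues.length && video.currentTime >= cues[next].at) {
    cues[next++].run();
  }
  if (!video.paused) requestAnimationFrame(tick);
});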
There's quite a lot going on in the background to keep things performant:
preloading resources, batching the operations that add or remove things,
and making sure to time everything so we don't do too much per tick.
The various features that have been added, e.g. title effects and shakes, have
also been tweaked and tuned to make sure they run smoothly.
But part of keeping things smooth has also been choosing what not to do, e.g.
removing certain filters and additions that clearly caused problems. I would
like to have added a drop shadow to the final scrolling text —but some browsers
couldn't handle this, and I haven't had time to upgrade that text from CSS/HTML
to Pixi.js (to see if it would help).
Serving Video & NOJS
On top of trying to be performant, I have also tried to keep my bandwidth usage
down. This is definitely my most-involved personal site so far, in terms of
cloud technology.
And as we all know, cloud costs can creep up.
So when it comes to video, I have tried to reduce the amount of data that is
downloaded, at least initially. Mainly so I don't get hit by bots and other
visitors that aren't going to be using the video.
This has involved processing the video using DASH (Dynamic Adaptive Streaming over HTTP), essentially breaking the
video down into smaller parts, which are then loaded and stitched back together dynamically.
Doing this has meant that browsers don't suddenly start downloading the whole video.
They just load a fraction until play is pressed. It also gives the benefit of being
able to support lower-bandwidth devices in a good way.
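On the client side, a player library such as dash.js then only needs the manifest up front; a minimal sketch (the manifest URL is hypothetical):

// Minimal dash.js setup: only the manifest is fetched up front, and
// segments are only pulled once playback starts. The URL is hypothetical.
import dashjs from 'dashjs';

const player = dashjs.MediaPlayer().create();
// autoPlay is false, so no heavy downloading happens until play is pressed
player.initialize(document.querySelector('video'), '/video/reel/manifest.mpd', false);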
Why do you need to worry about this? Why not hide the video away in a non-landing page
and allow users to signal their intention to use the video by navigating?
I hear you ask across the bouncing lines of streaming internet information.
Well, this brings me to the other point outlined in the title: NOJS.
Even though I am a JavaScript developer, I understand the benefits of getting sites to
be functional without JS. I have seriously always subscribed to the progressive
enhancement philosophy — only using scripting to enhance the experience.
This is not only good practice in terms of design, but also for learning, and it tends
to benefit people who have accessibility or security concerns. It also helps automated
systems navigate your site, which can be a good idea —to a degree.
And doing this has become easier and easier as CSS got smarter and smarter. I should know,
every one of my sites has slowly got better at doing more with less code. True, CSS is now
a beast. And, to be quite honest, it is definitely pushing beyond its domain. But if applied
in clever ways, you can achieve almost every feature (usually to a lesser degree) that you
can build in JS. At least in terms of navigation and animation.
And it is for this reason that this site pretty much loads EVERYTHING right at the start.
There aren't any deferred requests to get more HTML, CSS, or JS. This is because, if I did
that, it would break things for the NOJS experience.
So this also means that all the videos, and all the images, and all of everything is available
to the browser from the moment the source is parsed. And the trick becomes not progressively
enhancing, but actually speedily blocking normal browser behaviour (which is to try and please
the user by preloading everything).
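Much of that holding-back can be declared in the markup itself, so it works even with JS disabled; a rough sketch of the kind of template fragment involved (the mediaBlock helper name is hypothetical):

// A rough sketch of markup that stops a JS-free page eagerly fetching
// every asset up front; the mediaBlock helper name is hypothetical.
const mediaBlock = (videoSrc, posterSrc) => `
  <!-- preload="none": the browser fetches no video data until play -->
  <video controls preload="none" poster="${posterSrc}">
    <source src="${videoSrc}" type="video/mp4">
  </video>
  <!-- loading="lazy": off-screen images wait until they are scrolled near -->
  <img src="${posterSrc}" loading="lazy" alt="">
`;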
There are obviously caveats to this behaviour. At the moment we do have something that is like
FOUC (Flash of Unstyled Content), but not. I mean, the content is styled, it is just content
that probably shouldn't be being viewed right now (CTPSBBVRN?). This is because the browser
has yet to navigate to the section we are supposed to be at.
All these issues are overcome-able (that isn't a word) with enough time of tweaking and fixing.
I've done it with my past sites. It just takes time, so I am still working on it. However, this
brings me back to what this whole site is about.
Getting a new job. But in the spirit of that, let's explain more about this site.
To those with an eagle eye, you'll notice that this screenshot is not of SSR render code.
It is instead code I've written for an experiment using XENOVA transformers. But well done for spotting.
My Own SSR System
I have spent quite a bit of time working with various frameworks over the years. You know the ones - Drupal,
CodeIgniter, Symfony, Joomla, Robotlegs, Sails, Express, Next.js, Nest.js, React, Angular, Knockout, Vue, Ember,
RiotJS, etc. This is mostly because they do make business sense, as long as you are happy with about 80% of your
requirements being met. They also tend to make it easier for companies to hire people to work on them.
However, whenever it comes to learning something, there is nothing better than trying to build the system
yourself. I have done this time and again, with pretty much anything you can think of. Just for the sake of
learning. I am a self-taught coder, which is why I was fully fluent in a number of languages before I even started
college (I started coding when I was ten). It is also why I know why things work, or aren't working, without
needing much investigation.
So, when it came to job-searching again (and rebuilding my portfolio site for the fifth time), I took the
temperature of the current Zeitgeist and knew that the system should be based on WebComponents.
I have of course been looking at all the frameworks that have popped up around WebComponents, and whilst I've
been happy to tinker with some of them, I haven't found one that allows me to do the kinds of things I want to
do. Which is to mess around. Like most frameworks, they are all rather constrictive. And at the same time, they all
still come with build processes.
BUILD PROCESSES NEED TO GO THE WAY OF THE DODO!
(and by that I don't mean imported by sailors into Europe)
That is why I decided to implement an SSR system that would be simple, and just use Node and JS template processing
without any dependencies beyond Express. It would help me learn the pitfalls that other recent frameworks have
no doubt hit. And it would allow me to better support different devices and to serve my site quicker —rather
than relying 100% on client-side rendering.
How It Works
The whole thing sits on three main pieces:
ServerFragment - Simple template functions (the highest level)
ServerDom - The hidden bit that wires all the templates together
Express.js - Because there's really no point redoing this part (at least for me)
But, overarchingly, all the system is is a set of template functions that use backtick strings to process
JavaScript, which are then stacked within higher-order functions that can manipulate the templates to get the output
they need. I probably won't win any awards for security, or power, but it works pretty well for a system that hasn't
got any dependencies handling the modular or SSR aspects. And it is built out of these piffling seven files.
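To make that concrete, here is a minimal sketch of the idea (names illustrative, not the actual seven files): templates are just async functions returning backtick strings, and higher-order functions manipulate them.

// A minimal sketch of the core idea (names illustrative, not the real
// ServerFragment/ServerDom code): templates are async functions that
// return backtick strings, composed and manipulated by higher-order fns.
const header = async (vars) => `<h1>${vars.title}</h1>`;
const page = async (vars) => `
  <html><body>${await header(vars)}${vars.content}</body></html>
`;

// a higher-order function that manipulates a template's output
const trimmed = (templateFn) => async (vars) => (await templateFn(vars)).trim();

trimmed(page)({ title: 'Hello', content: '<p>World</p>' }).then(console.log);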
ServerFragment: Components, But Different
The ServerFragment is probably the most interesting part. It's basically a way to write
components that:
Handle props (Yes, I spent far too long working with React)
Can cache stuff (a developer's best frenemy)
Work with async code (because to live is to juggle)
Process templates (in a modular and hierarchical manner)
The ServerDom part is where things get a bit more interesting. It's responsible for:
Managing variables (because global state is just as useful as local).
Running some stuff in parallel and other stuff in series.
Handling scripts and styles (because you can put these things everywhere these days).
Keeping track of components (otherwise it would be a terrible framework).
One of the trickiest parts to get correct was the parallel processing, and it may still have some bugs. But it
means we can speed up the server's response quite a lot.
The Highlights
Caching
The caching system is pretty straightforward:
Want to cache in memory? Use MemoryCacheStore
Need fresh content every time? NoCacheStore has got you covered
Want something else? Just build your own handler.
I haven't had any need for shared caching as of yet, but it would be simple to link into a Redis cluster or
memcached system.
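For illustration, a store only really needs a get/set shape (the method names here are assumed, not the actual contract), which is why swapping in Redis or memcached later would be straightforward:

// An illustrative sketch of the store shape; the get/set method names
// are assumed rather than the actual contract.
class MemoryCacheStore {
  constructor() { this.map = new Map(); }
  async get(key) { return this.map.get(key); }
  async set(key, value) { this.map.set(key, value); }
}
class NoCacheStore {
  async get() { return undefined; } // always a miss: fresh content every time
  async set() {} // nothing is ever stored
}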
Parallel Processing
Ever since learning about the basics of parallel versus serial execution (in nearly every language I've ever
fathomed), I have found it a very interesting topic. It is also clearly a topic that people who build languages
find interesting/frustrating.
What we are talking about here isn't what most programmers would call parallel, we're not running multiple threads
or daemons or anything. We're talking concurrent execution of async operations. But it is still very useful,
especially on the server side, mostly because —with Node— you really want to avoid locking up the event loop.
// Let multiple components do their thing at once.
// component1 and component2 stand in for promises returned by child
// fragments that have already been kicked off.
SD.parallel(template, async () => {
  await Promise.all([component1, component2]);
});
Resource Handling
There are a few different ways Scripts and Styles can be added into a template:
tag direct — as in, it will render a script:src or a link:href tag directly at that point in
the template. This is the simplest usage, and can be useful sometimes, but most often the other two below are
better.
tag grouped — which means the tag is grouped (and made sure to be unique) and then injected
using the grouping variable (deferredVar) you've chosen. This allows multiple templates to ask for the style or
script they need, without worrying about duplication in the final source. The deferredVar makes sure that we only
get the set of styles or scripts that were actually needed.
tag injected — which means the source is loaded up on the server and directly injected into the
page. I find the evolution the web has been through quite funny: inlining was once bad practice, and now it is a
rendering-speed optimisation. This is that option, and whilst it makes your source code ugly, it definitely helps
with rendering speed.
// this is an example of how to use deferredVar to create script groupings
// these variables are rendered right before the final render is sent down
// to the client, so the template functions, whether async or not, can extend
// those variables as they see fit.
const exampleTemplate = `<head>
  ${deferredVar.joinTags('dynamicScripts')}
  ${deferredVar.joinTags('dynamicStyle')}
</head>`;
// this will combine all the css into an embedded <style> tag
// meaning we gain the ability of being modular, but the speed
// of being embedded in the page.
${await styleLinkInline(
'styles/core.css',
'assets/fonts/core.css',
'styles/PageHeader.css',
'styles/Projects.css',
'styles/view.css',
'styles/fullscreenAndExits.css',
'styles/animations.css',
'styles/nojsAndLazyLoad.css',
)}
When Would You Use This?
Realistically, no one else will use this. But... I did use a similar system to
power a very lightweight API I developed for Trouva. We had the entire API running
in a Lambda, that also needed to output some HTML, and we didn't want to add something
heavy to render templates. That version didn't even use Express, it had bespoke request
handling, but it did use this same templating model, and it took it further with the
variables to support language translation.
How Components Get Registered
Nothing fancy here, just good old function registration:
const name = SD.register(templateFunction);
Template Processing
Something my head has always been rather good at (or bad at, depending on how you look at never-ending loops) is
recursion. And that is what the template rendering uses to handle its hierarchical structure.
// this is the main processing function, that takes a template function
// and handles rendering it down. As you can see, it looks deceptively
// simple. But due to the recursion it can achieve a lot of things.
// (AsyncFunction isn't a global, so it has to be derived first)
const AsyncFunction = Object.getPrototypeOf(async function () {}).constructor;
export async function processFn(templateFn, vars, iterations = 0) {
  if (iterations > 100) {
    throw new Error('possible infinite recursion');
  }
  try {
    const result = await templateFn(vars, SD);
    // if the output still contains placeholders, wrap it back up as a
    // template literal and recurse until everything renders down
    if (result?.includes('${')) {
      const nextFn = new AsyncFunction('vars', 'SD', 'return `' + result + '`');
      return processFn(nextFn, vars, iterations + 1);
    }
    return result.trim();
  }
  catch (exception) {
    console.log(templateFn.toString());
    console.error('Error processing template:', exception);
    return '';
  }
}
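As an illustrative usage of that recursion: a template that escapes its placeholder survives the first pass, and the second pass evaluates it against the render-time vars.

// Illustrative usage: the first pass leaves a literal ${vars.name}
// placeholder in the output (note the escaped \${), so processFn
// recurses, and the second pass fills it in from vars.
const greeting = async (vars) => `<p>Hello \${vars.name}</p>`;

processFn(greeting, { name: 'world' })
  .then((html) => console.log(html)); // <p>Hello world</p>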
Variable Handling
For now there are two types of variables, global ones, and those that are passed at render time into the template.
The global vars can be built up by any layer of template, which is fine, as long as you know what you are doing. At
some point I might try and work out how to achieve local state. But at the moment, the mixture of call-time params
and global state has worked well.
SD.setVariable(name, value);
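As a quick sketch of the two kinds in use (the getVariable counterpart is assumed here): a global set once is visible everywhere, while render-time vars arrive as the template's first argument.

// Globals are visible to every template; render-time vars arrive as
// the first argument. The getVariable counterpart is assumed here.
SD.setVariable('siteName', 'codelamp.co.uk');

const footer = async (vars, SD) => `
  <footer>${SD.getVariable('siteName')} - rendered for ${vars.user}</footer>
`;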
Why did you tell me/us all this?
As stated a number of times —and if you've read this far, well done— the
purpose of this site is for me to find a new job. A position that I will be
likely focusing on for the next decade. A job that involves a passionate team, working
on useful, intelligent and/or meaningful things. If that's you or your team...
Hopefully, now, you have a better idea of the kind of person/developer I am.
Yes, I do waffle, but I also do try not to.
Hopefully you have more of a clue about the kinds of technologies I can
eloquently thread together to achieve target designs or planned outcomes.
Hopefully, I've done that through something complicated and simple, yet esoteric and useful.
A phrase which pretty much covers what I've done for the last few decades.
Please note: if you deal in any field of Gambling or Dark Patterns, ranging
from Online Poker to Dodgy In-game Purchases and Loot Boxes, please let's not waste either
of our time.
I have now been working professionally in web development for over two decades, having started —professionally— in (or around) the rather indescribable year of 1998.
Over this time, I've realized I haven't explored half of the technologies and frameworks as deeply as I should like; and I appreciate less than half of the languages and tools half as much as they deserve.
Or, something like that.
As it stands, I've worked with quite a number of different technologies... Some for a long time, some for fleeting moments.
But everything, no matter what it was, has helped me learn —yes, even Flash. Meaning that these days I can build pretty much anything... well, as long as it exists on the internet, or at least somewhere near it.
What are you looking for in a position?
I have to confess I’m not entirely sure.
If I’m honest, I’m not really looking to get back into Marketplaces any time soon, as I’ve been there a while —and done that. I’ve also had a bit too much of the ‘startup mentality’.
Don’t get me wrong, startups have been where I’ve met some of the most enthusiastic and brilliant people. And if the right one came along, then definitely. But the actual act of working in a startup takes a lot, from everyone involved. Which I guess is fine, if it pays off… but it seems that it can very much be a pipe dream. I’d have to really believe in the message and the ability of the company to sign on. But if I did, you’d find no one more passionate or hard working.
Really, I don’t mind a change at the moment. As long as the position has quite a bit of focus on JavaScript, I’ll be good (that doesn’t mean sole focus). It would be nice to find some stability somewhere too, but perhaps that’s asking too much from the world at large these days. It would also be nice to find some forward-thinking companies again, and not just those trying to throw AI at everything till it sticks.
Forward-thinking for me would be a team making use of AI in a human-beneficial and environmentally-balanced manner, not replacing jobs with it, and looking into the next stage of the web. Which is definitely going to be AI Agents, paired with human-designed and built systems, leveraging Rust/WASM for performance, and JavaScript for speed of development. They may even be using Go for simplicity and parallel performance, or C# for its integration ability with existing systems. Really, more companies should be making use of the gains with WebGL/WebGPU too —I rarely seem to see this however, beyond what is auto-handled by browsers.
The most important thing to me, however, —no matter what tech— has always been the team I work with. If they are hard working (yes that includes smart working) generous, and passionate about what they do —then it sounds like a position I’d be interested in.
I do have some red flags in terms of where I won't work, but you can likely find out what those are from reading more of this site. Typically it boils down to places that promote gambling or fintech (that isn't trying to simplify the space and help people save).
About this site
This is the v5 generation of my Codelamp site. It has gone through many incarnations, but its reason for existing is still the same: to show off my skills, and to explain why I tend to approach things differently to many other developers.
For starters, you might think that this site is either hand-coded, or perhaps built using a CMS. Both are true, to an extent, but not in the way you would expect.
The outer shell of this site, as always, has been lovingly crafted from scratch using a blend of my hard won experience and all the recent things I have learned (but wasn’t given the chance to try during my professional work).
This time, that covers:
WebComponents
Websockets
Paintlets
Hosting Video
My own SSR
Numerous interesting other JS-isms
The content however is managed by Notion™️.
I had the idea of using Notion as a headless CMS a while back, but have only now had the chance to put it into action. And it works quite well. I can use Notion’s brilliant note-taking interface to write my page content, and then pull that data using their API, and then finally wrap everything with Node-based cleverness —which generates the actual site.
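A rough sketch of the pull side using the official Notion SDK (the page ID is hypothetical, and error handling is omitted):

// A rough sketch of pulling content from Notion to feed the site
// generation; the page ID is hypothetical, error handling omitted.
import { Client } from '@notionhq/client';

const notion = new Client({ auth: process.env.NOTION_TOKEN });

async function fetchPageBlocks(pageId) {
  // every Notion page is a block whose children are its content
  const { results } = await notion.blocks.children.list({ block_id: pageId });
  return results; // raw block objects, ready to map onto templates
}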
Why use WebComponents?
This one should be obvious to anyone that has done any pure HTML development on a larger scale, or Component-led development with frameworks, like React.
Components help with a great number of problems that developers face:
Encapsulation
Reuse
Demoing / Explaining the work
Testing the work
For a long while, if you wanted something repeated in HTML, you had to repeat the HTML. Which is fine if you have a server-side language that is helping you duplicate said HTML. If you haven’t, then you enter into the world of copy and paste —and all the joy that can bring.
WebComponents allow you to have this duplication/reuse all kept together under a tag interface. And whilst it isn’t perfect, it has come a long way from the early attempts of libraries like Polymer.
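As a minimal example of that tag interface with nothing but the platform (the element name is illustrative):

// A minimal native WebComponent: reuse kept behind one tag, with no
// framework involved. The element name here is illustrative.
class ProfileCard extends HTMLElement {
  connectedCallback() {
    // Shadow DOM provides the encapsulation: outside styles can't leak in
    const root = this.attachShadow({ mode: 'open' });
    root.innerHTML = `
      <style>h2 { margin: 0; font-family: sans-serif; }</style>
      <h2>${this.getAttribute('name') ?? 'Anonymous'}</h2>
      <slot></slot>
    `;
  }
}
customElements.define('profile-card', ProfileCard);
// usage: <profile-card name="codelamp">Senior Full Stack</profile-card>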
As I’ve mentioned elsewhere in this site, since getting burned by Flash (it just evaporated away one morning), I am keen to stick to as much native handling as I can. So, now that WebComponents are pretty stable and reliable, I’d happily argue that people should be using them directly —instead of requiring often heavy frameworks to wrap things up with syntactic sugar.
Yes, there are annoyances to them, but that just means you have to learn how best to handle those situations. And once you have, you often find that whatever frameworks were giving you was just getting in the way —or, forcing you down an opinionated path.
About my Projects
I have been pushing forward various ideas from vapour to something more solid for much of my life. At least from about the age of ten. I still recall the first game I ever decided I wanted to make, Nut1000, which was a strange mix of flying squirrel meets RoboCop.
This of course never transpired, I was far too young to bring anything like it into reality. But the drive and passion to do so was ridiculously strong, and still hasn't died.
I still try inordinately hard to get my Projects off the ground… Even when the only time I get to work on them is squeezed between all of the things expected of an ape-like descendant, one that has come down from the trees only to discover mortgages don't grow anywhere else either.
Why do you work in web development then, rather than a games company?
Well, setting aside the idea that a games company would actually want me, I have also decided, through much research and investigation, that I don't think I want to work at a standard games company.
This might be changing, as from what I hear, Indie companies are starting to take off, and perhaps stealing work and Devs away from the AAA+ sector.
But, the kind of work and projects that the majority of these games companies tend to work on, is not what I want to be doing.
I want to be building proper web based game experiences. These are just like websites really, but websites with damn good graphics, high performance, and a heartbeat.
Most game companies wouldn't know where to begin with this. And those that do, rarely aim for the kind of quality that I'd like to achieve. I'm not talking about online poker, or flappy bird clones. I've never wanted to create things because they'd be popular.
I'm trying to create the kind of things I want to play.
And trust me, I'm never normally on the track for popularity.
Why haven't you achieved an actual game then? If you've been working at it so long?
I have been pushing towards these ideas way before browsers were ready.
Now, that will definitely reveal my early naivety, but I don't mind. Because while I don't have a fully fledged game, or even a playable demo, all this trying again and again, re-coding and re-developing, has given me valuable insight into how browsers work and how games work, and it has very much helped me do my day job.
These days however, technology has caught up, but infuriatingly I now have much less time than I ever used to.
However, the silver-lining to this is that it has forced me to “up my game”, as it were. I have to develop on very tight time periods, and I have to be very modular… otherwise, all the effort is wasted when I'm next able to pick up a project. Because the next time I'm able to pick up the same project could be months or even years.
This is where I’ve found LLM Agents are helpful, not so much in building anything from scratch, but in terms of analysing code state and finding helpful ways to combine multiple POCs together —it has been quite invaluable.
And I have A LOT of POCs.
Aren't you worried that by the time you actually release something, AI game makers will steal your thunder?
Yes, and no. I would be very worried if I was building the kind of games you can throw together over a weekend and chuck up to an app store to make a quick buck, but leave players with a terrible/monotonous experience.
But each of my projects are unique and difficult to achieve, almost on purpose. I don't think my brain will let me do things by half. But I also know that the current state of AI won't even let me create half of what I've already constructed. I've tried, and the results are comical.
It might well be that things will change rapidly over the next few years, but they may also not.
And even if they do get to the point where anyone can think of a game and then create it from a sequence of prompts, I still don't think I'll see a game similar to those I'm working on. And… if things get that fast and good, I'll already be way ahead of everyone.
True, if game creation becomes so simple and throwaway, something may be lost in the desire of people to play such games. But I'll repeat what I said before: I'm building the kind of games I want to play. The day that I can actually play these ideas that have been banging around in my head for decades… will make me very happy indeed. And then I'll be able to move on to the other ideas I've had that have forever taken a backseat.
Don't you think you could have created a full game by now, if you'd focused on one idea, rather than… erm, I'm not sure how many??
It is pite quossible, but I wouldn't say hugely likely. The reason seems to be the way my mind works for my personal projects. Coupled with what I've already stated (browsers have only just recently become good enough), my brain seems to have a finite amount of energy to spend on any given thing until it needs to recharge.
I've found that the longer I focus on one task, the more tired and half-hearted my implementations become. But… if I give myself a break (a few days or so) I find I'm quickly back to good ideas and working out the best usage of my time.
What I also found was that if I pick up another project, in that time, it's like the newly picked up project has full battery. And this seems to work for as many projects as I have ideas.
True, I definitely am working on too many things now, directly because of this behaviour. I'm fully aware of this. But I have absolutely got better at prioritising since working at Trouva, and am now able to group my projects together. Meaning that even though you may see six or seven simultaneous projects, they are in fact whittled down to just three really:
Polydust
Holodust
TEOWAD
What on earth are those? They sound like new projects to me/us??
Yes, but they aren't. The first two are my game engines. I have two: one based in 2D, powered by Pixi and Matter (Polydust); and one powered by three.js (Holodust).
Polydust will work for all of my 2d game ideas and has been designed as such:
Exit
Harmsway
ALWTM
Mote
Gadgets
And Holodust will work, for now, only for Wote.
The third is a book. Yes, I know, don't start… but I've actually been working on that for longer than my games.
Don't you think that you might have published your book by now, if all this pesky game engine work hadn't gotten in the way?
Oh my word, no.
Writing a book is the single most difficult thing I've ever done, but it has also been the most rewarding. If I had published early on, it would have been terrible (that doesn't mean it still won't be). It has taken this time because, inexplicably, my brain works differently for writing my book to working on my other projects.
Whilst I seem to be in charge of making my games, I am definitely not when it comes to my book. It seems to write itself, but very very slowly in strange sudden realisations and complicated thoughts that I have to translate. Weirdly, that makes it sound much more officious than it is, in truth it is a very silly book —but like my other projects, it seems to make my head happy.
And in keeping with such a plan, and after many, many hours of
trialling different things (and even messing around with music),
I've put together a logo cut-scene. It still needs some work,
but isn't too bad.
Longevity
Spiraldust Games will focus on Web Games, which means finding sustainable funding. Servers aren't free, and without major investors or aggressive advertising, we'll likely need a minimal subscription model. I only believe in subscriptions when customers get ongoing value — so I'll need to deliver!
When I say "Web Games," I'm not talking about MMORPGs, gambling portals, or quick-cash clone games stuffed with in-app purchases. I mean thoughtfully designed, unique games that need nothing but a browser to play. This has been my vision since I first encountered the internet.
People make a lot of assumptions about Web Games and Browser Games, mostly due to poor examples from the past. While some of this stemmed from browser limitations or developer shortcuts, modern browsers are incredibly capable with HTML, CSS, JS, and GPU access.
These games will leverage the web's strengths. One key approach is releasing content in "pages" over time — starting with an initial chapter or level, then expanding gradually. I've always admired this model, like how XKCD has built an enormous catalog one comic at a time, even if updates seem slow in the moment.
This growth model shapes the design of each game. Pebbl and Exit will follow this format closely, while ALWTM is more traditional — though it can still expand with new puzzles and content over time.
So no game will ever be truly "finished". And whilst this might sound like an argument towards vapour-ware, it truly isn't. It is trying to find a way to leverage the benefits the web gives, rather than ignoring them. But my apologies do go out to the completionists out there.
A Long Way Till Morning
I didn't realise quite how aptly named this game would be when I started.
But it definitely has been a long way to getting it even remotely near to
something that might actually exist.
I do not mind however, it is my hobby, and I will get it completed one day.
True, by that time you might be able to ask an AI Game Maker to build a similar
game for you in an afternoon... but hey, such is the way the cookie progresses
in really unpredictable ways.
Progress
This is a recent render test taken from the Polydust Engine, which shows off
the system's ability to provide what the client needs depending on the current
camera/viewport settings.
Ideas that won't leave me alone
ALWTM is borne from multiple ideas that came out of nowhere and proceeded to not leave me alone from then until now (and very likely tomorrow, or the next decade).
It was the first idea that I had for a true Web Game. A personal gripe, one that I've been subjected to time and again: when I mention I aim to build Web Games, people (or recruiters) liken them to Angry Birds or Bejewelled clones (which are at least games) or —even worse— online gambling.
No, online gambling is not gaming.
Do not contact me about anything to do with online gambling.
So, for the uninitiated, what are Web Games? From my perspective they should cover the following:
They should be easily accessible and work on as many devices as possible, the only way to really achieve this is to run in a browser.
They should have a proper story, something that has taken effort to create.
They should NOT try to extract money from people at every chance they get, or use dark patterns to addict players.
Explaining the story
Because my animation skills are slooow, I needed a way to explain the story I wanted, without spending all my time working on the animations. So I decided to try and build some graphic-novel style (animated) pages. The animation here would be simple loops, so nothing too difficult. And in places AI can help speed things up.
I have a rule on using AI: never use it for anything that I couldn't do myself. So if anyone has an issue with my using it for images or asset generation, let them be aware that everything I have used it for, I could have created myself. Most of my assets are hand drawn or photographed. But in some areas I have used AI to help speed me along.
And I only decided to do that because I've already been working for close to two decades on my projects. If I manage to get anything off the ground —and, in the unimaginable event of making money of some description— I will definitely look to paying real people to help. Who wouldn't prefer to work alongside a skilled human, rather than a bundle of mathematical probabilities? But in order for that to happen, I need to have the funds.
As you can see, they still need work, and these are just the still versions.
But they hopefully get across the idea: these bits won't exist in the
actual game play, but they will appear in areas where I need to show off more
detail of the character —to give it a more graphic-novel kind of quality.
Premise
The idea is a mixture of a point-and-click adventure and an action platformer.
I want to see how far I can push the 2D paradigm. I'm not a big fan of 3D for storytelling
(with the exception of Half-Life).
So ALWTM was designed with the idea that I could take photographs and hand drawn assets
and build out 2d layers that the character could move through. Using specific conventions
to allow the player to move between the Maps (the zoomed places where the player can
point and click) to the Worlds (where the player can move around more like an action game).
The main trick here has been to use parallax depth to good effect (just like brilliant
games that came before) and allowing for points in the Worlds where the layers can be
switched, letting the player travel deeper into the world.
Escape the XI Terminus
Another long-standing game, one that I've been trying to create from the moment
I saw the Xiaoxiao
animations, many years ago. This game, however, I knew would take a long time, as I
needed to wait for web technology to catch up.
The idea is to have a character that can handle the kinds of moves that would
be expected of a stick man that knows martial arts. It is more complicated than
that though, because I wanted the control system to be simple (even workable
on a basic tablet), allowing moves to essentially level up, but only with
good timing.
Complexity
There have been a number of attempts at this game engine too, mostly focused
around the character's movement. It has been a lot of work, but I'm not complaining.
This effort has led me to learn a great many other things, again —just like with Pebbl.
Spine (animation/rigging software)
Inverse Kinematics (ok, I don't fully know this one, I've had a lot of help and it is still something difficult to control).
2D ragdoll physics
Polygon simplification and slicing
Raycasting, Bezier curves
Below you can see an old render of the animation, from a previous engine.
The art of dexterity
I've tried multiple implementations to reach a behaviour I would like.
These ranged from specifically animated skeleton structures, to those
powered by physics or IK, they've all had their benefits and detractors.
What I've reached now is something halfway between manual
animation, and modes that can be switched into to give a ragdoll effect
when the character is no longer in control of their actions.
The key reason for this is that programmatically controlling such a skeleton
will work in some ways, and then totally flake out in others. I could
probably get there if I had the time to spend on it. But I simply don't.
And whilst hand animating things does actually take quite a bit of time,
it isn't anywhere near as complicated as the former.
Exposition
There are definitely two major influences, beyond the Xiaoxiao animations,
that I've drawn from. Both are probably quite obvious, although
you may have to play the game to realise the first one.
Portal
There are clear story elements that are an homage to the Valve™ game,
Portal™. But I make no apology for this, Portal had a massive effect
on me in terms of how I view computer games. I'm sure I'm not the only keen
Indie Game developer who has listened through hours of Valve dev
commentary to learn various tricks, to see what worked and what didn't,
and to know exactly the kinds of things that should be left to decompose
in soft peat until the revolution comes. Needless to say this was
Valve's second game to subvert and upend my thinking, 30 guesses as to
which was the first.
In the event of a fire...
The other influence should be quite obvious just from looking at the artwork,
and it is one of the reasons why I've felt compelled to complete this
game. There is a particular image that you can spot almost anywhere you go.
It depicts a man in constant danger, frozen in time, surrounded by a sea of green, and...
they are very definitely always running away from fire.
I have purposefully not aligned my artwork too heavily towards that highly
stylised (and no doubt copyrighted or protected) signage. But, hopefully, people
can get the feel of it seeping through the style and the story. Put simply, this
is your chance to rescue that man from his ever imperilled state.
Will you manage to help?
This is the intro to the game, the idea of which has stayed pretty consistent from
the point when I first started working on the concept.
"A character, frozen in some kind of statis box, surrounded by a green liquid is being transported. They are being carried, dangling from a mechanical rail through darkness. Suddenly there is a malfunction and the box drops. We hear a crash. The view pans down from darkness to light, to reveal the box has smashed, the green liquid has poured out across the ground. And the character is laid out, unconscious on the floor of a small white room. Alarms start to sound, the character awakes and the door to the small room shoots open."
...and thanks to a mixture of hard work in Photoshop, all my learnings in prompt engineering, and my video production knowledge, I've managed, after more than a decade, to turn my idea into an actual game cut-scene. Which I'm quite happy with.
AI?
Beyond the various rendering handlers, filters and control systems,
I've also been trying to modularise the animation handling. So that
I can perhaps hook things up to a trained AI. However, I haven't had
the time to truly focus on that. Just another flurry of POCs.
A model should be able to state, for a given animation movement,
how the head, arms and legs would be individually moved. But... that
is more of an idea/experiment at this time. Something I would love
to get into, and with the rapid advancement of ML and AI tech I
might be able to get something working soon. I have found a number
of browser/node-possible models that are able to determine a human pose
from an image (to varying degrees of success). But nothing as of yet that
can work out how to get from one pose to the next.
Tunnels vs Platforms?
One thing that —because I didn't really understand the relationship
between most physics engines and polygons when I started— caught me
out with this game's design was the idea of having corridors that
could move.
You might think that platforms and tunnels aren't that different. And
depending on how you build them, they don't have to be. If you have
a simple tunnel, you can build it out of platforms arranged around
the area, and job done. But of course, my design didn't give rise to
simple tunnels.
My design needed corridors that could intersect and move, even angle
and scale. Initially I thought this would just be simple polygon
interactions. And if I had built the physics engine myself from the
ground up, perhaps it could have been. But... I don't have the time
for that. Not if I want the game to actually get built. So I tried
to use existing physics engines, and quickly realised that they don't
have the concept of an inverted shape.
What I mean by an inverted shape is a bit like how you can use a
subtractive shape in a graphics program. You set a positive (additive shape)
and then use a subtractive shape to cut a space out of the positive shapes.
I had just assumed this would be possible. But it seems the maths involved
just isn't easy for a physics engine.
So much of my work has been trying to create a physical world that starts
off fully solid (walls), and allows me to cut shapes out of it (corridors),
but "faking it" using a normal physics engine —if you followed all that,
well done!
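To give a flavour of the "faking it" approach, here is a rough sketch in Matter.js (not my actual engine code): the solid world around a corridor is approximated with static bodies traced along its edges.

// A rough sketch (not the real engine code) of faking an inverted shape
// in Matter.js: instead of subtracting the corridor from a solid world,
// static wall bodies are traced along the corridor's edges.
import Matter from 'matter-js';

const engine = Matter.Engine.create();

function addCorridor(x, y, width, height, wall = 20) {
  // two static bodies along the corridor's long edges act as the
  // "solid" world; a real version would trace angled/intersecting edges
  const top = Matter.Bodies.rectangle(x, y - height / 2 - wall / 2, width, wall, { isStatic: true });
  const bottom = Matter.Bodies.rectangle(x, y + height / 2 + wall / 2, width, wall, { isStatic: true });
  Matter.Composite.add(engine.world, [top, bottom]);
}

addCorridor(400, 300, 600, 80); // one horizontal corridor
// a crate dropped inside now stays within the corridor's bounds
Matter.Composite.add(engine.world, Matter.Bodies.rectangle(400, 300, 40, 40));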
I managed to get to a good place, but one that might not be as performant
as I need. It also has bugs when particular arrangements of corridors align,
so, if I am going to use it —I need to do so sparingly and with the right
configuration.
In the above demonstration, you can observe the dynamic movement of solid
walls. These walls serve as barriers, preventing the crate and other debug
elements from exiting the corridors. The system traces the outer edges of
the polygon shape, including intersected or angled polygons, effectively
defining the boundaries of the playable area.
This is running in Chrome on a MacBook Pro. But it isn't quite as performant
on smaller devices, which is why I'm still looking into the best way to achieve
what I want. This is definitely the closest I've gotten however, and it doesn't
take a huge amount of calculation to achieve. The issue mostly is trying to
combine this activity in physics along with the reflection filter (and in the
future, fire effects too).
I'm currently experimenting with an engine that augments Matter.js, but
we'll have to see where that gets to; hopefully somewhere where the problems
you can see below won't be an issue.
And then there's testing with adding dynamic lighting. I still need to
improve the optimisation of this, as it is affecting the frame rate. But, despite
the awkwardness in the screen recording, you can see where things are heading.
And then there's the water tests. I've been trying out various ways to get natural water effects
that also respond, at least as much as is possible in high-performance 2D, to physical objects. This video
shows my best test yet.
The End Of Worth & Distance
I have probably been "working" on this book for far longer than any of my other projects, and that's saying something... But I use the word working in the same sense as an occasionally overflowing water source might work towards eroding the side of a mountain.
I should be clear, my speed of work isn't directly my fault. There were some years where I barely got any spare time outside of my proper work. And then there were the years that I no longer remember very well thanks to child induced sleep deprivation. And then there were the years where I decided to rewrite lots of it, which, ok, that one probably could be pinned on me. But that was definitely a good decision because what I have now —in terms of the story— I'm very happy with.
The process
I tend to get the question from people who know I've been writing this for some time, as to whether I'm near the end. And to answer that is tricky, because writing a speculative fiction novel definitely is a process.
Those in 'product engineering' or 'web development' might like to talk about iterations, or the unavoidably evanescent 'phase 2' or 'fast follows'. But when you are writing —at least for me— you iterate so much that it is very difficult to state if any part of the book is truly stable.
I'm pushing things a bit there, because there are definitely some parts of the story that feel solid. And have been for at least a decade. But I've given up trying to say with 100% certainty. Because I have always encountered occurrences that suddenly upturn parts of the story, which then need to be fleshed out again. The basic principle of the idea tends to remain the same, but the androids with brains the size of planets (or whatever you are writing about) fluctuate and change.
It's not just a book
The other part, which I often forget myself, is that I'm not just writing a book. Because I've been working on this project for a long time, I've of course been reading a lot about how I might get things published. And the recent push (at least in literary circles) towards self-publishing seems fraught with lots of things to watch out for.
One of the key areas that is necessary is marketing.
So, taking a leaf (or any arrangement of flowers) from my time in advertising, I've also been working on a site to publicise the book. And I've been developing the world so that there are other elements that would be interesting to those who might immerse themselves in the universe.
To be honest, the whole thing has been very much like working on a Web Game. Hence the reason why I've decided to start talking about it alongside my other 'digital projects' (as the print advertising world used to call anything using computers).
Oh, and the funny thing for anyone who has been paying attention to what I've talked about above... is that this project isn't just one book. The idea is (currently) planned as three books. But those other two books will likely only see the light of day if the first one helps me get into a position to spend more time writing. Although the fact that my kids are older now might help a little bit too.
This is an image I put together using Photoshop and then animated using AI. It is a visualisation of one of the scenes from the book, which I'm using as part of the marketing site. It shows Worth talking to Distance over their FarTalks.
Pebbl
This was an idea that first jumped into my head in 2007, and yes, I'm still working on it (and no, not consistently). It was borne of the idea of having a web-based puzzle game that was split into screens/pages. And I would be able to get the system to a place where I could release new screens over a period of time.
Premise
The idea was simple, the execution perhaps less so... (as is the thing with most of my game ideas). But here it was only because I wanted to build things in a particular way.
And as usual, things always hit into issues with polygons.
Back when I started, there wasn't a huge choice in how to do a "web game". There were a few libraries just starting out to help with different elements of game creation, but no real frameworks.
In fact I started by building my own canvas-based engine, from scratch. It ran ridiculously fast, but had very few features. It did at least have raycasting for the shadows, but as usual, I again hit the issue of needing polygon-based collisions.
And (foolishly, or not) I couldn't bring myself to not use complex collision shapes.
The idea...
The thinking behind the game can be summed up in one quote.
"A lazy 1% progress bean causes the destruction of an entire progress bar —by sleeping in— which means the game fails to load. The other 99% progress beans are sent flying around existence by the resulting explosion. Bean is set the task of recovering all 99 other beans, so that the game can finally load."
The resulting game would be a simple platformer/point-and-click (without any real platforms, and in the world of touch screens) where Bean meets a number of strange scenarios and other beans. He has to persuade, tempt, bribe, abscond, bean-nap, serenade, ask, convince, and essentially find a way, to get the other beans to come back, one by one to the progress bar. Repair the bar, and then continue the loading in order to save the day.
Features & Learnings
This project was the closest to a normal website, compared to my others. So in that regard there hasn't been much that's different to implement. However, there have been a number of things needed that I have had to learn —many of which were not immediately apparent.
1. How to handle mobile — this had never been a question when I started. Mobiles were thin on the ground in terms of ability to run anything, let alone games. However, it became more and more of an issue as time passed. The foremost issues have been device size and what control system to use. The answers have been: design mobile-first, and on-screen joysticks.
2. How to handle conversations — I was always aware that I would need to do this, but I didn't realise how complex it could be. I'm now quite happy that I've found Ink (Inky), which does a lot of what I needed as a game creator. It just needed some bespoke coding around that format for me to be able to tie it into the game.
3. Inter-tab communications — This was the one that was unexpected. But when developing a web game, especially one that autosaves, multiple tabs become a problem. This has led me to develop my own inter-tab communication via postMessage and localStorage (a rough sketch follows below). Which works quite well... Although I'm sure by the time I come to launching the game there will be a full-blown inter-tab native API.
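Here is a rough sketch of the localStorage half (the key name is hypothetical): a write in one tab fires the storage event in every other tab on the same origin.

// A rough sketch of inter-tab messaging via localStorage; the
// 'pebbl:bus' key name is hypothetical. Setting a key in one tab
// fires the 'storage' event in all the *other* tabs on the origin.
function broadcast(message) {
  // a timestamp keeps repeat messages distinct so the event always fires
  localStorage.setItem('pebbl:bus', JSON.stringify({ at: Date.now(), message }));
}

window.addEventListener('storage', (event) => {
  if (event.key !== 'pebbl:bus' || !event.newValue) return;
  const { message } = JSON.parse(event.newValue);
  // e.g. pause this tab's autosave when another tab takes ownership
  console.log('another tab says:', message);
});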
Canvas (Vanilla)
As mentioned, I constructed a very basic canvas system for rendering Pebbl back in some of my early iterations. One of them can still be viewed here.
There was then a long period of time where I couldn't work on the system. So that version lapsed. And... better options appeared.
Phaser
I started tinkering with Phaser for other projects, but quite liked its simplicity. So, in the hope that it could give me a speed boost that might get my game off the ground, I started to build a version using that.
It actually got me the nearest the game has ever been to being workable. It even supported mobile. But... things again stalled, due to my time being needed elsewhere, and projects being very awkward to pick back up after having worked on a number of other things in the interim.
Unity Experiments (3d)
The engine I attempted in Unity 3D —before Unity had a 2D offering— took me ages just to get working. So I soon decided to give up on Unity as an option. The learning curve was steep, and the outcome was not what I had hoped. Much of this came from forcing a 2D game into a three-dimensional world though (and battles with friction... who hasn't battled with friction, at least once?).
I kept these videos just to show that I did build a full engine in Unity, and to prove just how many times I've created the Pebbl game engine —and in many different guises. These two videos show experimenting with the Bean character.
More Learnings
I've always used Pebbl as my learning ground. It has helped me find out all sorts of things that have gone on to inform my professional work. I am not kidding either; here is just a sample of the things I've learned whilst working on Pebbl:
Canvas API
WebGL
Unity
WebSockets
Node/Express
Touch/pointer handling
All sorts of Polygon/Geometry mathematics
LocalStorage
Multiple different Auth solutions (Firebase, Passkeys, Token auth, Passport)
Reverse engineering the Switch controller feedback, so that players can connect and use their Switch controllers.
One of my favourite experiments with Pebbl has been setting up an RTCPeerConnection using WebSockets for the handshake, and then operating communications between a player's mobile and desktop (or other device) via their local network. This allowed me to turn their mobile touch-screen device into a controller (and auth/passkey fob). And it just goes to show how far web-based technologies have come.
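A heavily trimmed sketch of that handshake (the signalling message shapes and URL are hypothetical): the WebSocket only carries the offer/answer and ICE candidates, after which controller input flows peer-to-peer.

// A heavily trimmed sketch: the WebSocket is only used for signalling;
// once connected, controller input flows peer-to-peer over the data
// channel. The URL and message shapes are hypothetical.
const signal = new WebSocket('wss://example.invalid/signal');
const pc = new RTCPeerConnection();
const pad = pc.createDataChannel('controller');

pad.onmessage = (e) => console.log('input from phone:', e.data);

pc.onicecandidate = (e) => {
  // trickle our ICE candidates to the other device via the socket
  if (e.candidate) signal.send(JSON.stringify({ candidate: e.candidate }));
};

signal.onopen = async () => {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signal.send(JSON.stringify({ offer }));
};

signal.onmessage = async (e) => {
  const msg = JSON.parse(e.data);
  if (msg.answer) await pc.setRemoteDescription(msg.answer);
  if (msg.candidate) await pc.addIceCandidate(msg.candidate);
};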
Mote
This is a game idea I had around the time I was working at the Tate Gallery,
although I realised that the idea for it had come from when I was much younger.
Again, the premise is quite simple, but the execution... is inordinately tricky.
And again, I didn't want to lower the scope. So here we are, much later, still
not having even a demo up and working. But... I have still been pushing the idea
forward, mostly as I've learned more and more about how to handle cloud-based
servers that can handle this kind of open world.
Premise
The idea is three fold:
The player can move across a deterministically random infinite 2d space (inspired by Elite 2, not Minecraft or No Man's Sky); see the sketch after this list. Like a dust mote flying between island spaces (static asteroids). But only as far as their power source will let them travel before it needs to recharge. They can land on islands and construct additions to those islands from component elements.
The player can collect elements/materials from different islands that are formed from different colours. Colours can be broken down into constituent components. Different colours do different things.
The player shares the space with other players. The aim is to be creative, build out your islands, using collected elements and discover how you can create new elements by using your HexEditor.
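To illustrate the deterministic part (this is a generic coordinate-hashing approach, not Mote's actual generator): the same coordinates must always produce the same island, so nothing about the universe ever needs storing.

function hash2d(x, y, seed = 1337) {
  // Mix the coordinates into a pseudo-random value in [0, 1).
  let h = seed ^ Math.imul(x, 374761393) ^ Math.imul(y, 668265263);
  h = Math.imul(h ^ (h >>> 13), 1274126177);
  return ((h ^ (h >>> 16)) >>> 0) / 4294967296;
}

function islandAt(x, y) {
  const r = hash2d(x, y);
  if (r < 0.97) return null; // most of space is empty void
  return {
    size: 20 + Math.floor(hash2d(x, y, 2) * 100),
    colour: Math.floor(hash2d(x, y, 3) * 0xffffff),
  };
}

// islandAt(12, -40) returns the same island on every device, every visit.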
Island test
These landscape tests (or Islands, as the game would call them) are running in an older version of Chrome, using a combination of Pixi.js and Pixi lights. I generated the normal maps dynamically, because the islands were randomly generated. The outcome was pretty good; the only downside was juggling visual quality against performance. The normal maps, whilst not 100% accurate, didn't need to be —because the landscape was so rough. They were generated using Pixi's filters, which allowed me to colour and blur the shapes, building up highlights and lowlights in the right colours for the lighting system to work.
I can recall each part of my history that has had an additive effect on my ability as a developer.
Graphics
My early days of coding were definitely heavily influenced by what code can do for graphics.
This is because I’ve always been heavily visual in the way that I do things, but also because I started out with a ZX Spectrum and an Amiga 600.
With the Spectrum I had to learn the correlation between binary and the screen render.
But I did most of my learning with the Amiga, with the likes of Assembler, AMOS, DPaint and BlitzBasic, building mini games and numerous graphical demos.
Here I specifically learned the correlations between memory and different parts of the computer, even being able to fabricate a high resolution version of the screen with more available colours than typically offered by the Amiga. This was done by combining binary frames of two screens together.
I also learned the basics of sprites and how to achieve pixel environments, and animated graphics. At this time I also taught myself languages beyond BASIC and ASM like Pascal.
The Amiga and AMOS still hold a special place in my heart in terms of my computing history. Mostly because it was the best set-up (even compared to modern equivalents) for combining coding with graphics in a quick and easy to run environment. And… it still allowed you to break out into low-level ASM routines.
College
After working on many of my own projects at home, without much programming help from school (they did have an obsession with teaching word processing, however), I moved up to college, where there was a programming course that had stated it would be teaching C++ (something I had been very interested in).
So I chose that as one of my options. Unfortunately, when the course started they didn’t teach C++; they decided to teach Pascal instead. I did at least learn something of databases on this course (bad ones), but mostly spent class time learning about graphics DLLs (as they were easy to access using Pascal). I continued to teach myself C and C++ at home instead.
I had one of the weirdest experiences as a result of an exam at this college. I was marked down for finding more issues in the example code than was recorded in the marking notes, and was told my written answers to some of the questions had been too technical. “Too technical” on a computing course. When I questioned this, I was told the exams were often marked by people without technical knowledge.
HTML → HTML5 / CSS → CSS3
My coding style definitely changed when I switched over to PC and started experimenting with the “Interweb”. Things became less low-level and more about files and scripting.
CSS didn’t exist, JScript and VBScript were doing battle, and even creating a page layout was a learning curve. Everything became DHTML, and fullscreen page transitions were never a good idea.
I still remember the brilliant access to learning that HTML gave; an example of this was first discovering the <ul><li> elements. All I had to do was inspect examples of bullet points I found around the web to learn how to use them.
It was a great time for discovery —especially as I was gaining knowledge as things were invented/released, which made it more manageable than the sprawling complexity presented to those starting out now (especially CSS). However, you did have to build nearly everything yourself, which meant a lot of boilerplate/setup if you wanted to achieve anything.
JavaScript (:all_the_things:)
As ECMAScript started to become the language of the web, my focus on it grew, across various variants. The different flavours that existed between browsers, the versions that existed as sideline languages (in Flash, Photoshop, etc.). It was a revelation, because previously I had only ever been able to run my code locally. And now I could publish it to the web, or use it to interface between systems.
I did tinker with VBScript, but it just didn’t hold up to what JS could do. I know there are lots of detractors out there who have a dislike of JS (some have good arguments, others don’t), most of them being coders who like stricter, more locked-down languages. But for me, it was exactly ECMAScript’s expressiveness that pulled me in. It was closer to sketching and being artistic.
Clearly, JS —if used improperly— can get very messy. Its power can undo itself. But… as long as you understand what the language is capable of (you know the difference between deep and shallow cloning, how references work, closures, how functions act as first-class objects, how garbage collection works), you can build a great many things, very quickly and safely.
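A couple of tiny examples of the kind of understanding I mean:

// Shallow vs deep cloning: the spread copies the reference, not the array.
const original = { scores: [1, 2, 3] };
const shallow = { ...original };
shallow.scores.push(4);                 // mutates original.scores too!
const deep = structuredClone(original); // a true deep copy
deep.scores.push(5);                    // original is untouched

// Closures + first-class functions: a counter with genuinely private state.
const makeCounter = () => {
  let count = 0; // captured by the closure, invisible from outside
  return () => ++count;
};
const next = makeCounter();
next(); // 1
next(); // 2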
ActionScript 1, 2 → 3
When Flash released AS3, I fully switched some of my projects away from the HTML world over to Flash —because it allowed me to seriously mix my coding ability with graphical handling (just like I had before with AMOS). You have to understand, at the time, browsers could hardly move layers, let alone handle effects.
Whilst I did learn a lot from that, it was definitely a hard lesson when Flash essentially died as a technology. I did look at Silverlight, but thankfully bounced right off that one.
The disappearance of Flash set me back quite a bit in terms of my personal projects, and to this day, I will always choose native tech over proprietary or third-party if I can. Precisely because the browser environment has been stable enough to be progressively enhanced for years (and other things, well… haven’t been).
One project that particularly got hit was ALWTM. I had a full engine built in AS3 that could handle parallax scrolling, weather effects, and keyframe animation. It is an engine I can’t even run myself these days —let alone put into an online setting.
PHP3 → PHP7
I spent a long time working mainly with PHP in my professional output, and I learned a lot in dealing with this kind of server-side code. The kind that evaluates and doesn’t retain anything in memory. Those learnings are less relevant to me today, but it was still an interesting journey.
The main thing I recall from the world of PHP was a split love/hate thing. I loved the fact that php.net was a great resource for information, but it had to be, because PHP’s choices of method naming and parameter order were a mess.
Because of PHP’s nature, I also learned a lot about securing API calls at this point. This was the period in my life where nearly everything I built was an online form.
There were A LOT of forms…
Programming Evolution
As most of my projects continued to be JavaScript focused, my learnings mostly revolved around that, although I did also have secondary streams of learning Python and Bash.
I have been specifically aware of each time my coding ability has jumped up. And it has always been down to either a realisation learned from a personal project, or learning from someone else’s project. There are some key points that I’m sure I share with other coders, mostly thanks to the power of the Internet.
Power of Events (and pub/sub)
Ever since triggering my first events (simple event listeners in the DOM), I’ve been keen on the simplicity of how events can disconnect systems in a good way, to separate concerns. But there has also been a lot of learning about where events work (usually one-way communication) and where events cause issues (often two-way communication, or situations where timing and order are crucial). They are still my favourite choice to start with, even if I might elaborate into something beyond events later. Mainly because they are so easy to understand.
“The main thing I’ve learned to watch out for with events is that many developers never think about teardown. They are happy to set up events, but use an app for an extended period of time, and you’ll see that the first place to check for memory leaks is event listeners.”
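A minimal sketch of the pattern I tend to reach for, where subscribing hands back its own teardown (so forgetting to remove listeners becomes that much harder):

function createEmitter() {
  const listeners = new Map(); // event name -> Set of handlers
  return {
    on(name, handler) {
      if (!listeners.has(name)) listeners.set(name, new Set());
      listeners.get(name).add(handler);
      return () => listeners.get(name)?.delete(handler); // the unsubscribe
    },
    emit(name, payload) {
      listeners.get(name)?.forEach((handler) => handler(payload));
    },
  };
}

const bus = createEmitter();
const off = bus.on('saved', (data) => console.log('saved', data));
bus.emit('saved', { at: Date.now() });
off(); // teardown is just the returned function, nothing left dangling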
Power of Selection
My first foray into the power of selection was clearly jQuery. The fact that the idea of using CSS selection to target elements had escaped me (and many other devs) was something that intrinsically changed my thinking. From that point on, I have always been looking for crossovers between technologies, essentially where you can get something for free (from something that already exists).
The day I wrote my first integration with Sizzle (not jQuery) was the day I laughed at myself for the countless times I had called getElementById in the past.
Power of Selection (v2)
A second collision with the power of selection, this time pairing data to a selection, was clearly D3. This levelled up my thinking in terms of how you could build things directly from state.
This led to a period in my life where I tried to tie d3 to everything. But I quickly found that, whilst powerful, it did lead to obtuse code unless very carefully dealt with. The idea of pairing data with selection however is still a powerful one, and I use it wherever I can.
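The data-pairing idea in miniature (D3 v6+ join style; element and field names are illustrative):

import { select } from 'd3-selection';

function render(scores) {
  select('#chart')
    .selectAll('div.bar')
    .data(scores, (d) => d.id) // the key function pairs each datum to an element
    .join('div')               // elements are created/removed to match the data
    .attr('class', 'bar')
    .style('width', (d) => `${d.value * 10}px`)
    .text((d) => d.label);
}

// Call render() with new data and the DOM simply follows the state.
render([{ id: 'a', label: 'A', value: 4 }, { id: 'b', label: 'B', value: 7 }]);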
Power of Streams and Lists
I came to streams later than most of the other tech features I know, and I still find them obtuse in places, but they can be perfect for many use-cases. I can’t pinpoint exactly where I started learning about them, but I think it was mostly during my investigations into Node. Lists, however, have always been with me. From some of my very early projects, lists, linked lists and arrays have been powerful tools that can be used for so much more than just holding values in order.
I once built a game called d-tron, which was a tron clone (with depth). But rather than drawing free lines on a canvas-like world, it involved storing used positions in a massive multidimensional array. Not really the best use of an array, but… it worked (there’s a rough sketch of the idea below).
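Roughly how that worked (a simplified 2d slice of it; the real thing had that extra depth):

const SIZE = 512;
// Every cell starts unvisited; a trail claims cells as it passes through.
const used = Array.from({ length: SIZE }, () => new Array(SIZE).fill(false));

function advance(player) {
  const { x, y } = player;
  if (used[y][x]) return false; // hit an existing trail: crash
  used[y][x] = true;            // otherwise claim the cell and carry on
  return true;
}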
Power of Complexity
This was more of a lesson learned in how complexity should be avoided, especially when working in teams of developers. I was given the chance to help build out a company’s marketplace site, replacing their old Ember-powered site, with a React version.
This was in the early days of me taking on the title of “product engineer”, and if I’m perfectly honest, there were a number of requested tasks that went against my usual instincts. There are some implementations that I still disagree with to this day, but… on the other hand, working there definitely changed my thinking —and many approaches I would never have taken by myself are ones that were recommended to me by that team.
One choice I disagreed with then, and still tend to disagree with, is using a framework to build a website frontend. I only tend to go this route if the client specifies, and in this situation they specified that they wanted a React site.
I understand the draw for a company to use an off-the-shelf system: it helps with hiring and developers’ cognitive adoption. But, as many are finding out these days, it adds a lot of weight to what should otherwise be a lightweight system.
Because I had just been searching around for jobs prior to landing this one, I had been specifically increasing my knowledge of React, Angular, RiotJS, Vue, etc., because that was what was expected at that point in time.
React always seemed over-the-top to me, but I was swayed by two key ideas: having a singular state tree that the entire site reacted to, and the ability to time travel, allowing us to replay a user’s journey through the site. Two very powerful ideas.
Unfortunately, for anyone who knows what it is like to manage a complex React site, I learned a lot of lessons very quickly in this site’s implementation. I followed the then recommended practices, despite my own reticence towards some of the recommendations, which led us to a site that had a number of good bits, but also a number of pitfalls.
One area I definitely wish I had handled differently was Sagas. Whilst Sagas are very powerful as a concept, it is that power (and major difference from how most other JS runs) that caused issues, not only for me, but also for the rest of the team. Making small changes to that area became tricky, and people needed to switch their thinking completely (compared to the rest of the site) to understand it.
I came away from that site with a number of good ideas however, chief among them that developer cognisance in teams is more important than the power of the system you are building.
Power of Automated Tests
I was quite a latecomer to realising how powerful automated tests can be. Mostly because I was introduced to the world of unit tests before integration or e2e tests. Unit tests, unless covering a unique or tricky area of functionality, are mostly pointless and a waste of time. Integration tests, and even better —end-to-end tests— can be complete life-savers. And if you are clever enough to also get these tests covering your production in a live context, they can keep you aware of live issues.
Once I had been working with a number of production systems in a live context, I began to rely more and more on automated tests to give me confidence —especially when working with a “startup mentality” which typically means needing to release things quickly. But… it became important learning the best way to build these kinds of tests, and that’s an area I’ve vastly improved on over the last decade.
You do have to choose your tests well when writing integration/e2e, because they are more costly to put together. But I also learned that if you can build in a recording system into your tests, one that is able to capture live or staging data when required, that can help a great deal in keeping things up-to-date.
Now, if I were to build any production-based system from scratch, I would definitely start out designing my data stores and code implementation so that they benefit e2e testing. I know there are a number of people who state you shouldn’t code your production logic towards tests (i.e. your production logic should be completely unaware of tests). And whilst I know what they mean (and I would of course avoid having conditional checks in my production logic that reference test information, e.g. if (env === 'test') {...}), what I mean is to design your system so that test data and production data can live side-by-side in the same system, without fear of it leaking to the public. That way, once you’ve spent your time working on these expensive tests, you can put them to work at almost any environment level (dev, staging, test and production).
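As a sketch of the idea (the field names and MongoDB-style API here are assumptions, not a specific system I’ve built):

// Every record carries a provenance flag; the public read path filters
// synthetic data out by default.
async function findOrders(db, { includeTestData = false } = {}) {
  const query = includeTestData ? {} : { isTestData: { $ne: true } };
  return db.collection('orders').find(query).toArray();
}

async function seedTestOrder(db) {
  // e2e tests write with the flag set, so the same suite can run against
  // production without ever surfacing in real users' views.
  await db.collection('orders').insertOne({ item: 'example', isTestData: true });
}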
Power of Pragmatism
“Being pragmatic” is a term that seems to be overused in the startup world. And it was something that took me a long while to understand. I spent a long time believing that I was slow —at least in terms of getting code live (despite being very quick at interpreting and understanding code)— and that I needed to find ways to speed up. But I then realised that this was wrong: I wasn’t slow; that was an incorrect interpretation of events.
There are those that do “being pragmatic” well, but it is difficult to be consistently pragmatic. To do it well, you have to have a good idea of the bigger picture and the small detail at the same time, because you need all that information to make the right choices.
But this is where many do not get it right: they only ever see pragmatism as finding the quickest shortcut they can take. That is not correct; being truly pragmatic means that you can sometimes choose the fastest route, and other times you should choose the slower one.
Being pragmatic doesn’t mean ‘move fast and break things’
My learning is that if you only focus on whether or not the code got out the door in X amount of time, then yes, the rushed version of pragmatism seems great.
But… what I learned, and I believe those that worked closely with me also noticed, was that the code I got out the door —albeit slower than some of my counterparts— very rarely came back to bite me, or anyone else for that matter. My version of being pragmatic was to make sure things were stable; it had nothing to do with speed to release.
And this had an interesting effect. Whilst there were teams that spent quite a bit of time fighting fires from previous work, whilst also trying to get new work out the door, myself and the team I worked with rarely did. We were usually free to pick up new work and properly focus on it. And so, after time, I began to realise that my being slower was actually a benefit.
That’s not to say there have never been times when I could have moved a bit faster, perhaps not run certain tests that weren’t likely to fail —but my takeaway is that this kind of speed-up shouldn’t be called pragmatism… for me, it was more about increasing confidence that the release wasn’t going to break things. Which I found I could definitely increase when there was more time for building automated tests.
Power of Readability
This is possibly one of those things that has only come about due to me getting older: my eyesight getting worse, my brain starting to move through the world with fleeting moments of forgetting why it was moving somewhere in the first place. I say this because I used to secretly roll my eyes at people who would complain about my three-letter variables, because I knew what those variable names meant.
However, one thing that has been steadily true —and seems true for most developers— is that when they look back at their own old code, they are shocked at how bad it is. You can take this one of two ways:
The way humans handle coding is intrinsically bad.
Developers are improving all the time.
And… one of the things I really do not like about my very old code is the terrible variable names. Since working at Trouva I have taken on quite verbose, and, I’m proud to say, accurate variable and function naming.
Mainly because I found that the better I got at this, the easier it was to get people to review PRs (and the easier it was to pick work back up after a while). As such, there was very visible and clear feedback that I was getting better at it. Another good test has been that when I look back at my more-recent-but-still-older code now, I find the variables and function names cause me no problem at all.
Power of Automation
This might seem an odd one, and it isn’t something I’ve learned myself as such —I’ve always seen how coding can speed things up. But one thing I have very definitely learned is that you shouldn’t underestimate the power of making a few quick fixes. Those fixes or changes can mean the difference between someone with a repeat task taking days to complete it, or getting it done in minutes after you’ve improved the system.
These kinds of ‘quick changes’ tend to get lost in amongst the sea of upcoming features, bug fixes and technical debt. But sometimes they can be more important than all the above, especially if administration UIs have been left to age and stagnate.
I have made small changes to UI panels that have greatly improved the morale of an entire team of admins, or quickly added a bulk ability to something that had been a mind-numbingly manual process, speeding up the work tenfold.
I think, because development always seems technical to those without the technical know-how, any change can appear big. But often people get things the wrong way round: something they think might take ages actually doesn’t —and vice versa.
This learning however needs to be tempered. I believe that it is the responsibility of those with the power to do so, to be careful that you don’t end up completely replacing someone’s job. Just because you might be able to fully automate a task, doesn’t mean you should.
For example, the current rush of the-less-than-sensible, to look at replacing numbers of workers with AI —in the name of productivity— is blind and not future-proof. Automations should be there to help people, not replace them. A human in a role will always be better than an AI, especially currently, but even if that AI becomes all-powerful.
This is true for a number of reasons, but the major one is of companies taking responsibility for their effect on society. Which unfortunately, with the likes of Trump and Musk running all-things-American (to take a few examples), is getting fewer and farther between. The more companies act like “darwinistic entities”, the more normal people will lose out. You can still achieve that highly coveted productivity, but with humans properly at the helm and in control.
Thankfully, anyone who thinks they can replace entire teams with AI agents, at the moment, is in for a surprise. They can’t. The illusion might be there for anyone that doesn’t look at the details. But, as any developer knows —the errors, bugs, and lawsuits are in the details.
Power of Hiring
This is something that entire industries are built around, and for a while progress seemed to be being made towards a better way of hiring. But lately, due to (perhaps pandemics and…) massive conglomerates doing their thing of buying companies up and destroying their uniqueness, things seem to have gone backwards.
I have personally gone through a number of different hiring processes. Having worked both PAYE and as a contractor, I’ve been headhunted, I’ve been recommended via word-of-mouth, and I’ve gone through many interview processes: good and bad.
For my very first interview, I was on the hiring side, rather than being hired —which was interesting. I’ve also done the “whole interview day” process for a good company, and a bad company. I’ve done the usual —and often quite terrible— recruiter processes, and I’ve also gone through very well-designed processes, like the kind Hired.com used to offer (sadly, no more).
My main learning from this is that only you have the power to make people understand what you can bring in a hiring context; you can’t rely on anyone else. Hiring companies that realise this, and give those looking for a job the chance to show off in their own way, are brilliant!
Unfortunately, because of human momentum, and the race for companies to get ever bigger and take over the world —whatever brilliance or uniqueness they might have started with gets lost in a swirl of bland, careless number-crunching. And… you are left having to rely on random recruiters, where it is pot luck as to whether they even understand you or your business (if you’ve read this far, then, “no, not you”). And let’s not forget the “please submit your CV” style monstrous websites, which typically don’t even have the category “web developer”, and instead lump you into the category of “You use computers, right?”.
You might think it is only that last part that is bad (what does a category matter?), but no: “upload your CV???” in this day and age. A single sheet of paper is not going to do anyone any justice whatsoever, and it is ridiculous to think that it ever did. If dating sites can take in a swathe of information, and apply clever calculations and even psychology to find a partner, why on earth do hiring companies do such a bad job?
This is the reason why, every time I come into a hiring situation, I build a website specifically for it. And this is that website. This allows me to rely on LinkedIn more heavily, as people can discover me and work out that I am a fit for their business or client. Hopefully the effort I’ve put in will be understood and I’ll be able to find a good position, within a friendly and intelligent team again.
One last thing to say on hiring (if you aren’t bored already): I have never yet come across a company that did hiring better than the original Trouva. I went through the process on the interviewee side, and became an interviewer whilst there. So much effort was put into two key things: making sure the potential hire was a good culture fit, and making sure that person wasn’t setting themselves up for failure. We hired people who clearly knew their stuff already, but also people that needed to learn.
My experience was in the product department, and every single person was always a force (or became a force) in whatever they were brought on to be —and the key to that was everyone, everyone (without exception) had respect for everyone else. I worked with a great mix of people, and everyone was keen to do a good job and make sure to help anyone else that needed it. Sadly, those days came to an end, again due to bigger businesses buying things up.
So my other repeat learning from this has been —if I ever get back into owning my own company— never to grow too big. Find the right middle spot, give people a balanced job and a stable future, and then build brilliant things together. Hopefully this constant drive to be the next fool with too much money will be seen as the fad that it really should be.
Power of Co-location
Co-location is the practice of putting related lines of code near to each other. Whilst that might sound obvious, it is actually more difficult to do well than it sounds. If you co-locate everything together, you get no modularity or separation of concerns. But if you separate everything too much, you lose readability and fathomability.
So my learning in this area has been to find out where co-location works, and where it perhaps doesn’t. One of my favourite findings in this regard is to co-locate setup and teardown.
You may already do this to a certain degree. I mean, it would take a truly wild-eyed programmer to put their setUp methods miles away from their tearDowns. Usually, you’ll find those kinds of methods right next to each other. However, in certain situations, it isn’t as easy as just placing two class-level methods near each other. Take using setTimeout, for example.
In the past I might have written something like:
doSomething() {
  this.tid = setTimeout(() => {
    // trigger something else
    // after a delay
  }, 100);
}

// ... potentially lots of other code

somethingThatNeedsToInterrupt() {
  if (this.tid) {
    clearTimeout(this.tid);
    // trigger the inverse of
    // the setup code
  }
}
This is all fine, but methods like somethingThatNeedsToInterrupt can be spread all over the place, meaning that your code isn’t co-located.
You can get around this by creating specifically named handlers/methods for the timeout callbacks, e.g. onTimeout and onTimeoutClear. And sometimes that can be preferable (as long as they are near each other in the source).
But… if you’d prefer to keep your callbacks anonymous and temporary, you can do something like this:
doSomething() {
  this.doSomethingTearDown?.();
  const tid = setTimeout(() => {
    // trigger something else
    // after a delay
  }, 100);
  this.doSomethingTearDown = () => {
    this.doSomethingTearDown = null;
    clearTimeout(tid);
    // trigger the inverse of
    // the setup code
  };
}
A variant of this might just return the tearDown, rather than attaching it to this; it depends on your style and what you are doing.
But… either way, when you need the interrupt handling, you can just call this.doSomethingTearDown?.(). The benefit of doing this is that both bits of code are co-located.
When someone adds something new to the set-up, it becomes much easier/clearer that something needs to be added to the teardown. It also means you can avoid capturing temporary variables like tid in a more permanent manner (just to deal with the clearing).
It is a simple learning, but it has seriously improved my ability to find this kind of related code and keep it up-to-date. Another place this works well is when setting up events:
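(A sketch of the shape; the handler names are illustrative.)

setUpEvents(elements) {
  this.tearDownEvents?.();
  // handlers live in closure — never attached to the instance
  const onPointerDown = (event) => { /* handle press */ };
  const onPointerUp = (event) => { /* handle release */ };
  elements.forEach((el) => {
    el.addEventListener('pointerdown', onPointerDown);
    el.addEventListener('pointerup', onPointerUp);
  });
  this.tearDownEvents = () => {
    this.tearDownEvents = null;
    elements.forEach((el) => {
      el.removeEventListener('pointerdown', onPointerDown);
      el.removeEventListener('pointerup', onPointerUp);
    });
  };
}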
This is great for places where you might create a number of event listeners (perhaps on multiple descendants). It keeps the listener references caught in closure, rather than exposed and attached to the main context. And… it keeps the setup and teardown right next to each other, without any risk that someone might put a method or other code between.
Power of leaving things late
The whole while I was growing up (I did, trust me), I was constantly told, by many sources, not to leave things late. Do things early and you get ahead of the curve. And that is very true, but only in specific contexts. It is, for example, definitely better to file your tax return before Christmas —and if we’re being specific, definitely better to have filed it before Christmas of 2021.
In tech however, I’ve found that holding off on certain things is definitely the best strategy. Oh, and before you start telling me this is obvious —A, B, and C all state that delaying X, Y or Z makes fiscal, temporal, perceptual or even geospatial sense (not standing back up too early under an open cupboard door for example). You have to keep in mind I probably haven’t read A, or followed B, or gone to university with C. It wouldn’t even matter if I had, because my brain is very definitely predisposed towards being an empiricist. It has to experience X, Y or Z first-hand, mainly so that it knows what makes them tick, and… so that it knows what to do if they ever decide to break and stop being functional letters of the alphabet.
My head worries about such things, you see.
Leveraging infrastructure is definitely one of those cases.
Do not bring shiny extra or new infrastructure into play before it is absolutely necessary. Try everything you can first, to avoid such things.
But… there is a caveat, don’t box yourself in, so you can’t bring those nice shiny new things later, perhaps after everything else has failed.
The reason for this, put simply, is that the more of everything else you have built —if you have done it well— the easier it becomes to slot in the infra at the end. If you build in a future-guessing manner, you very likely won’t do it very well.
This hasn’t really been a learning from my professional work, because in professional terms I’ve always avoided adding complexity in infra as much as possible —because infra, when someone else is paying, makes me nervous. Especially cloud-based infra, because it seems that money does grow on clouds.
It makes me nervous when I’m paying too, but usually for different reasons.
I’ve learned what I’ve learned from my personal projects because if you introduce too much complexity (a.k.a. things that slow you down) early on, the project collapses. It wouldn’t, if you had all your time to spend on it. But for personal work, you never do.
So these days I’m extremely careful with whatever I decide to add to a project, and usually that starts at the point I think I need a database.
I will do just about everything I can to keep things memory resident for as long as possible. This allows you to build out structures, change them quickly, reset the server, and clear out information, all at the speed you can code. As soon as you start to formalise schemas and other things is when you start to cause yourself headaches. Which, surprisingly (or not, if you know me), is my main argument against TypeScript. It is best to be as flexible as possible when you start, like working with hot metal, and then later on, when things start to solidify, you can add your formal schemas or types, etc.
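A sketch of what I mean (the store shape here is hypothetical):

// Start with a Map behind the same interface you'd later give a database.
const store = new Map();

const players = {
  get: (id) => store.get(id) ?? null,
  save: (id, data) => store.set(id, data),
  reset: () => store.clear(), // wiping everything while iterating is one line
};

// Once the data shapes have settled, swap the internals for a real
// database (and formal schemas) without touching any calling code.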
Some people may disagree with me.
But I’ve seen the evidence in my own projects. The ones where I could stay fluid are the ones that finally made it.
This is an experiment, in so much as I've never published a book before.
And from my experience in advertising, and the research I've done recently into self-publishing,
it seems that *marketing* would be a good idea for my book.
As such, I've been building a “marketing” site.
But I've tried to design it as if it is coming from Onaia (the world that the story is about).
This has actually led to building numerous mini experiments, essentially testing out newer things
that browsers can do. I say newer, but this could cover anything that has appeared in the last 5 years.
It has also led to me experimenting with things outside of development, e.g. new abilities in
creating graphics and video.
One example of a new learning is the fact that ffmpeg can be programmatically
controlled in terms of layers and effects. I had really only seen it before
as a conversion and utility command, but it can be used creatively.
Take this command, for example: I used it to create the video of a bright
flash that occurs on one of The Institute’s pages. I built it
from images that I’d created in Photoshop, and then used ffmpeg to
crossfade the images together, to give the illusion of motion.
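A simplified version of that kind of command (the file names are illustrative), using ffmpeg's xfade filter to blend two stills into motion:

ffmpeg -loop 1 -t 2 -i flash-a.png -loop 1 -t 2 -i flash-b.png \
  -filter_complex "[0][1]xfade=transition=fade:duration=1:offset=1,format=yuv420p" \
  -r 30 flash.mp4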
These mini experiments (or POCs) have slowly been amalgamated into
a site that I've called The Institute. Unfortunately, I settled on that name way
before I ever realised that Stephen King also had something similarly named :sweat_smile:.
And for my brain, changing it was not an option. My Institute is a nice friendly
place, a place for learning and puzzles. Other Institutes can be whatever they
want to be.
The world of TEOWAD is purposefully a mix of old-world and new, there's nothing
really groundbreaking in that steampunk-like concept. But it is one that I
immensely enjoy working on and bringing to life.
Newspaper Articles from Onaia
This 'newspaper' interface actually harks back to one of my earliest advertising projects
at Kitcatt Nohr: a magnifying-glass motif for Lexus (I think).
But here, the magnification is actually useful, and allows you to read more of
the backstory of one of the book's main characters. [ Click to play ].
This system is built using Hammer.js and Pixi.js, along with bespoke WebGL filters to achieve
the lighting and warping. It hasn't had a lot of optimisation work put into it yet, however,
so you might find —especially on larger monitors— that your laptop/desktop fans spin up. This is
because, to achieve the needed resolution for readability, I've had to use large textures.
This is fixable with more time; I just need to be more focused with the inputs
and outputs of the filters.
The Institute's Library
This is a realistic book viewer that I've developed as a nice way to fit in more
of the information from TEOWAD/Onaia that I wasn't able to fit into the book. And again,
it has historic value for me, in that it is a similar-looking interface (although
vastly different in code) to a system I helped build at the Tate Gallery called
Slidebook (which allowed visitors to peruse artist sketches as if in a sketchbook).
One of the hardest parts to get working well with this interface was the mobile
handling. The current implementation does work, but it needs improvement. Just in
case it isn't obvious, the whole system is mostly powered by CSS and 3D transforms.
And... just to draw attention to something that most will overlook (as I'm sure
you have seen an online book effect before): have you seen a version
that allows you to flick through the pages at the speed that this one does? Try
visiting the book and just holding down the right-arrow key. You can even change
direction half way through the tirade of pages. Quite fun.
(Legacy) FarVision Device
This is another device, taken straight from the pages of my book, made real for
people to play with on the web. It's called a FarVision device, and it plays a pivotal
part in the story. Not that you need to know this, but it is actually a legacy model
—as far as the characters are concerned. The device allows for signals to be picked up
from other FarDevices. If you want to activate it, just ask the Curator to activate
it for you (I haven't yet had time to add the proper buttons).
This is one of the most audacious things that I've tried to build on the web to date.
Not as complex as a game engine, of course, but it definitely has utilised all of my
skills to get it even part working.
What has been involved so far:
Video generation and editing.
Photoshop, and AI Prompting techniques.
Creating an HTML video component that can seek/play backwards, accurately.
Aligning said video to a 3D environment in Three.js.
Handling touch interactions.
Creating particle shaders.
Sourcing and editing audio.
Mobile and Tablet testing, aspect aligning and tweaking.
Orchestrating the UI to work with the Node.js/Express backend.
Developing the backend API that is a hybrid between Ink.js and Xenova Transformers.
The API allows for questions to be asked about the device (and the TEOWAD world), and the "Curator" will try
their best to answer.
Be warned, I am running this service on a shoestring, so it most likely won't handle
many users at once, and it will be slow due to being low powered. But... it does show off what I can do.
It also achieves a reasonable "understanding" of what the user is asking, as long as the Ink story
has been prepared with similar questions.
Essentially, I use a feature-extraction pipeline with a MiniLM model to pre-compute
embeddings for the questions already in the Ink.js story. Then, if
an incoming question matches closely enough (and the rest of the state in the Ink story aligns), the user
is given the pre-written answer.
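A sketch of the matching step (the model name is the one I'd expect for MiniLM in Xenova's transformers.js; the threshold and data shapes are illustrative):

import { pipeline } from '@xenova/transformers';

const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

async function embed(text) {
  const output = await extractor(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data); // a 384-dimension vector
}

function cosine(a, b) {
  // vectors are already normalised, so the dot product IS the similarity
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

// knownQuestions is hypothetical: [{ text, vector, answer }], prepared from
// the questions authored into the Ink story.
async function answerFor(question, knownQuestions) {
  const qVec = await embed(question);
  const best = knownQuestions
    .map((q) => ({ ...q, score: cosine(qVec, q.vector) }))
    .sort((a, b) => b.score - a.score)[0];
  return best && best.score > 0.7 ? best.answer : null;
}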
This is just the first step of the plan. I hope to use this system to power
Pebbl (as the conversation handler). And that means giving it call-out capabilities
to larger, more expensive LLMs that can feed back when a user asks a question that
we don't have an answer for. I obviously won't allow a live link to such external
systems. More likely, the app will gather the questions that failed for the day, and
then post them off to the LLM at a controlled rate to get answers. So, in theory,
over time the system learns to be better —whilst not costing the earth (in more
ways than one).
Reflections
One thing that has taken quite some time to get right is the reflections
you can see in the Exit game.
Harmsway
I don't count this as one of my projects, even though it is probably the one
I've worked on the longest. I see it more as a testing ground for experimenting
with different code. I've recreated a testbed for this game so many times I've
lost count. It isn't really very unique, but it is a game that very easily fits
in with using a 2d physics engine.
You are a blob —inspired by Putty from the Amiga— and you have to stay alive by
dodging falling things.
This permanent proof-of-concept has taught me a number of things: from
how to pack and use spritesheets, to how best to integrate a physics
engine in a headless manner, to building admin editors directly
into the game engine itself.
Without this particular project I wouldn't have half of the knowledge
that I'm using today as part of my indie game constructions.
See below for the various admin systems I've built around it.
Movement Tester
This admin is an example of the things that I build for my games. It is a
cut-down version of the full editor, hardcoded to just load
one Harmsway spritesheet by default.
This editor allows me to preview a character, to see how its animation
and movement combine with the physics handling.
One of the trickiest things I've found with 2d keyframe-animated games is
handling animation bridging. But this is because I approach it with a lot
of attention to detail. For me, there is nothing worse than poor or glitchy
animation, or slow reactions. It pulls you right out of suspended disbelief.
Unfortunately you often see both, especially in games that have
clearly been chucked into some kind of form without any care, built from shoddy
assets patched together. The quality of many games, especially phone-based
ones, is truly shocking.
Bridge Tester
I realise, now that I've said this, that I've set myself up for a lot of judgement
on the current state of this example (see app).
But you need to keep in mind that the clunkiness (and bugs) of this current animation
is only because I haven't spent long fine-tuning it. The animation bridge system
I've built is very flexible and configurable. But the animations achieved are only
as good as the effort that is put into them.
The reason for that is that I've had to put this site live earlier than I
expected. But, be assured, I will continue to improve it.
This bridge tester is slightly different to the movement tester, in that it is
designed specifically to test a singular bridge between source and destination
frames. Try it out to see the kinds of things that can be built alongside web games
to allow them to be configured and tested.
W.o.t.e.
Again, I've put this down as an experiment more than an actual project,
mainly because my aim for each of my main projects is to use the same
underlying game engine to power them all. This game, however, will definitely
need its own standalone engine. I may even opt to build it using a pre-built
engine. For now, it serves in a similar way to Harmsway: as an
experiment that I can test things with.
What makes W.o.t.e. different to my other projects is that it is currently
using Three.js. When you see the demo, you'll understand why.
Shadows & Light
This will get more of an explanation when I have more time and am not looking for work.
Click to view the very old example. I have newer POCs, but none of them are in a fit
state to host right now.
Web-based game controller
This experiment is something I hope to use with my Pebbl game, and perhaps
some of the others like ALWTM. It essentially uses WebSockets to set up a
connection between a user’s device and their laptop/desktop-computer/smart-tv,
with that connection then run over their local network.
So far, my testing has been brilliant; far better than anything I've attempted
before, mainly because the timing is precise. However, I'm sure I'll find issues
with it as more people attempt to use it. One of the major benefits I've found,
in addition to giving a touch-based controller to people who might only
have a phone (or a control pad that doesn't connect to their computer), is that I
can use the phone's presence as a login to their account. Now, I wouldn't
recommend doing that for anything that needs high security, but if we're
talking about an online game, I think certain players will appreciate the speed of
getting up and running. Rather than having to mess around with
passwords or passkeys, they can just pick up their phone, follow a link
or 2d barcode, and suddenly they are connected to the game. That will be
pretty good.
I have yet to build this POC into my spiraldust game engine (polydust),
but once I have, all my games and demos should be able to benefit.
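The relay at the heart of it looks something like this on the server (using the ws package; the room-code message shapes are illustrative):

const { WebSocketServer, WebSocket } = require('ws');

const rooms = new Map(); // room code -> Set of sockets
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  let room = null;
  socket.on('message', (raw) => {
    const msg = JSON.parse(raw);
    if (msg.type === 'join') {
      room = msg.code; // the code delivered via link or 2d barcode
      if (!rooms.has(room)) rooms.set(room, new Set());
      rooms.get(room).add(socket);
      return;
    }
    // Relay controller input to every other device in the same room.
    rooms.get(room)?.forEach((peer) => {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) peer.send(raw);
    });
  });
  socket.on('close', () => rooms.get(room)?.delete(socket));
});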
Do androids dream of vector figures?
I have been working towards this particular vector figure for some time,
building numerous tools to help me create his particular poses, flows and animations.
This is the character that will serve as the main protagonist (and the antagonists)
in my 'Escape the 11th Terminus' game. Initially I started out quite simply,
then I jumped to extreme complexity, and now I've found a place somewhere
in the middle.
My initial trials in this area used tools based on force-directed graphs.
The reason for this was that d3 was easy to use,
and gave me nearly everything I needed for a responsive skeleton.
However, there was some awkwardness, and getting the precise control
I wanted often involved the character's nodes being locked in place.
This was fine, as long as they stayed locked, but sometimes I needed
to release a lock and then the limbs would do a thing I like to call
'snap and jangle'. This implementation also didn't help me get to the
'design' of character that I wanted; it allowed me to do a conventional
stickman, but not the 'fire-escape' style I was after.
So, I then tried to move on to physics systems with springs and constraints,
with the likes of Box2D and Matter.js. This again helped in some areas, but
also brought with it the entire headache of keeping control over
the crazy world of physical simulations. And... I also learned that physics
simulations don't tend to like trapezoids.
So, undeterred, I started looking into Inverse Kinematics. And boy, did I
underestimate the complexity of that field. I would usually get somewhere with
one limb working as I wanted. But once I started bringing other limbs in, or
trying to get to more complicated postures —things got weird.
I thought —at one point— that AI might be my saving grace, at least in terms
of IK. But, I quickly learned that LLMs really do take to mathematics like
an English Literature student. They know the grammar of everything, but the
value of nothing —to the power of i.
To cut a long story short—er, I've tried:
In-browser AI models that detect poses from images.
Numerous POCs that control legs or arms.
Ragdoll constructions in both 2d and 3d spaces.
I've even thought about, but not yet tried:
Using gyroscopic sensors attached to myself (a.k.a. Switch controllers) to record movements.
Using specially designed video AI, that can understand and generate figures in motion.
Hand animating everything.
But through all that, I have —despite how it may seem— been making progress.
To the point that I have quite a good system for controlling and animating the
character. It doesn't do everything I need yet, but it is in a good state to allow
me to tie in other plugins and features that will give me what I want eventually.
You can try a cut-down, ripped-out and placed-here version as a demo.
This editor is designed to work with some very specific figure data, which it would
be amazing (and a little creepy) if you had lying around. So I've encoded some test
data into the system. You do have to import it first, however, so please follow the import
section (once you've created a new project), and then click to import the test data.
Keep in mind, this system works using IndexedDB, so it is completely offline. You can mess
around however you like; the data will be stored within your browser.
Over the years I’ve written a lot of code, but I’ve rarely formalised that code into official libraries or open source software. Mainly due to a certain level of perfectionism that my brain needs to satisfy before I’ll count something as good enough. These two libraries are the exception. Whilst they aren’t anything amazing, they served as good solutions when I needed them, and they were easy enough to tidy up and put onto GitHub.
Foray
Foray is "for arrays", obviously.
It is a JavaScript library that enhances arrays with custom methods, allowing for flexible and efficient operations. A functional programming influenced tool, built with efficiency and expressiveness in mind. Foray works by encapsulating arrays and furnishing them with new capabilities, without compromising their original characteristics or performance.
It is specifically designed NOT to be a class extension of arrays, and NOT to muddy the prototype chain. It is just a method that you call to wrap an array, and it returns an extended API.
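The wrapping pattern in miniature (this shows the general shape, not Foray's actual method set):

const foray = (array) => ({
  // the original array is captured in closure, never mutated or subclassed
  first: () => array[0],
  last: () => array[array.length - 1],
  unwrap: () => array,
});

foray([1, 2, 3]).last(); // 3 — and the original remains a plain Array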
Odo
Odo is an object creator that decouples data values from their structural definition, providing a more efficient way to transmit data, especially over web sockets. By defining values as a simple flat array and describing the structure as offsets within this array, Odo makes it possible to optimise the transfer of information for different use cases.
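The core idea, sketched (not Odo's actual API): the flat values are the only part that travels over the socket, while the structure is a shared, reusable description.

const structure = { name: 0, score: 1 }; // field -> offset into the values
const values = ['Ada', 42];              // what actually gets transmitted

function materialise(structure, values) {
  const obj = {};
  for (const [key, offset] of Object.entries(structure)) obj[key] = values[offset];
  return obj;
}

materialise(structure, values); // { name: 'Ada', score: 42 }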
I thought I might use this section to house a small blog area, one that I can update with technical articles, when/if I write them. Which, to be fair, like many blogs, has been sporadic at best over the years.
For now, I’ll just link to the existing blogs I have: