Motivation
I’ve been wanting to make games for a while. I made my first video game when I was 15 in high school, also using Unity. Obviously I’d forgotten almost everything about game dev, so I wanted to jump back in and relearn it from scratch.
My goal for this project was to see if I could make a full end-to-end game in a week. I needed to develop my chops, learn C#, and get comfortable with the tooling again, so I knew this wouldn’t be my magnum opus. I ended up finishing the game in exactly 7 days, and it feels really good to have actually completed a project.
Acknowledgements
I know people have mixed feelings about AI, but where I think it really shines is in use cases like this game, where I had a week and didn't have the money or time to invest in writing music. I used Suno for the soundtracks on Hallways. I won't call it writing (it's not, really), but I did know exactly what vibe and type of music I wanted. I make EDM, and I could've made these songs myself, but it wasn't in scope for the game, and it honestly would have taken me a long time to get them sounding as good as what came out of Suno.
I also used only free assets from the Unity Asset Store, and those were pretty nice. For the most part, the code was written by me, except for the proximity cue, which was a complicated shader I did not have time to learn to write myself. I knew exactly the vibe I was going for: emulating the 'monster proximity' cue in the great horror game Still Wakes the Deep. If you haven't played it, it's a weird teal oil blot that starts spreading from the corners of the screen when a monster is nearby.
All that to say: as I get better and faster at making games, I'll actually spend the time to do these things myself so I can learn how they work.
Overview
I built a small horror FPS prototype set inside a damaged spaceship corridor. Glowing entities warp through the walls and attack from all sides. The player has very limited visual information, so survival depends heavily on spatial awareness, audio cues, and subtle environmental feedback.
The core loop is simple:
- Start a round
- Enemies spawn over time
- Player survives or dies
- On success, advance to the next round
- Difficulty increases
- Repeat until failure
This seemingly straightforward loop forced a surprising number of architectural and design decisions.
This post covers how the game works internally, the tradeoffs I had to make, and the lessons learned building it.
Core System Architecture
The most important architectural decision was separating the concepts of a run and a round.
These sound similar but are fundamentally different: a run spans the whole session, while a round is a single survive-or-die encounter within it.
I implemented two primary state owners:
- RunManager — owns session progression
- RoundManager — owns current round state
RunManager Responsibilities
The RunManager handles long-term progression:
- Current round number
- Difficulty scaling
- Generating round configuration
- Restarting the game
It does not care about enemies directly; it simply prepares configuration and hands it off to the RoundManager.
Example conceptual structure:
public class RunManager : MonoBehaviour
{
    [SerializeField] private RoundManager roundManager;

    private int currentRound;

    public void StartNextRound()
    {
        var config = GenerateRoundConfig(currentRound);
        StartCoroutine(roundManager.StartRound(config));
    }
}
RoundManager Responsibilities
The RoundManager handles moment-to-moment gameplay:
- Whether the round is active, lost, or won
- Spawn limits
- Enemy counts
- Spawn permissions
- Round transitions
This keeps all round logic isolated and deterministic.
Example:
public class RoundManager : MonoBehaviour
{
    private int maxSpawns;          // concurrent spawn cap for this round
    private int currentSpawnCount;  // enemies currently alive
    private int enemiesLeft;        // enemies remaining in the round

    private RoundState roundState;

    public bool CanSpawn()
    {
        return currentSpawnCount < maxSpawns && roundState == RoundState.Active;
    }
}
This separation made the system far easier to reason about.
RunManager controls progression.
RoundManager controls reality.
Event-Driven Design vs Polling
A major decision was whether systems should poll state or respond to events.
Polling approach:
void Update()
{
    // Every frame, ask the RoundManager whether anything changed.
    if (roundManager.IsRoundActive())
    {
        UpdateUI();
    }
}
This works but creates constant dependencies and unnecessary work.
Instead, I used events:
public event Action OnRoundStarted;
public event Action OnRoundWon;
public event Action OnRoundLost;
Systems subscribe:
roundManager.OnRoundStarted += HandleRoundStarted;
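In practice each subscriber pairs the subscription with an unsubscription so handlers don't dangle. A minimal sketch, assuming a hypothetical RoundHUD component and HandleRoundStarted handler:

public class RoundHUD : MonoBehaviour
{
    [SerializeField] private RoundManager roundManager;

    private void OnEnable()  => roundManager.OnRoundStarted += HandleRoundStarted;
    private void OnDisable() => roundManager.OnRoundStarted -= HandleRoundStarted;

    private void HandleRoundStarted()
    {
        // Refresh round labels only when the state actually changes.
    }
}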
Benefits:
- Systems react only when needed
- Less coupling
- Clear state transitions
- Easier debugging
This became especially important for UI and spawning systems.
Coroutine-Based Round Flow
Unity coroutines became the backbone of round transitions.
Instead of trying to sequence logic manually, I let the coroutine represent the timeline.
Example:
IEnumerator StartRound(RoundInitValues config)
{
    // Block until the pre-round countdown finishes.
    yield return StartCoroutine(StartRoundCountdown());

    ApplyConfig(config);
    roundState = RoundState.Active;
    OnRoundStarted?.Invoke();
}
This ensures:
- Countdown completes before round begins
- UI transitions happen cleanly
- State changes remain synchronized
Without coroutines, this logic becomes fragile and order-dependent.
Enemy Spawning Architecture
Enemy spawning is handled by a dedicated SpawnController.
It runs a loop while the round is active:
IEnumerator SpawnLoop()
{
    while (roundManager.IsRoundActive())
    {
        // WaitForRandomSpawnDelay() yields a randomized WaitForSeconds.
        yield return WaitForRandomSpawnDelay();

        if (roundManager.CanSpawn())
        {
            SpawnEnemy();
        }
    }
}
RoundManager acts as the authority for spawn permission.
SpawnController acts as a worker.
This prevents spawning logic from becoming fragmented.
Restart Strategy: Full Scene Reload vs Soft Reset
One key decision was how to reset the game.
Two options:
Soft reset:
- Reset variables
- Destroy enemies manually
- Reset UI manually
Hard reset:
- Reload the entire scene
I chose hard reset:
SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex);
Benefits:
- Guaranteed clean state
- No hidden bugs
- No lingering references
- Simpler implementation
This eliminated entire categories of bugs.
TimeScale and UI Timing
The game pauses when a round ends using:
Time.timeScale = 0;
However, WaitForSeconds() is scaled by timeScale, so coroutines waiting on it pause as well.
This broke countdowns and UI transitions.
The fix was switching to unscaled time: WaitForSecondsRealtime() for coroutine waits, and Time.unscaledDeltaTime for per-frame timers.
This allows UI to continue while gameplay is frozen.
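A minimal countdown sketch using realtime waits; the countdownLabel field and three-second duration are illustrative:

IEnumerator StartRoundCountdown()
{
    // WaitForSecondsRealtime ignores timeScale, so this still ticks
    // while gameplay is frozen at timeScale = 0.
    for (int i = 3; i > 0; i--)
    {
        countdownLabel.text = i.ToString();
        yield return new WaitForSecondsRealtime(1f);
    }
}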
Enemy Detection and Proximity System
Initially, I used trigger colliders to detect nearby enemies.
This caused problems:
- Enemies spawning inside the trigger radius were not detected
- Trigger events could be missed
I switched to explicit scanning:
Physics.OverlapSphere()
This provides deterministic detection every scan interval.
Each detected enemy provides:
- Distance to player
- Signed angle relative to player forward direction
This data feeds the proximity system.
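A sketch of what one scan looks like; scanRadius, enemyLayer, and how the results are consumed are illustrative:

void ScanForEnemies()
{
    Collider[] hits = Physics.OverlapSphere(transform.position, scanRadius, enemyLayer);

    foreach (var hit in hits)
    {
        Vector3 toEnemy = hit.transform.position - transform.position;
        float distance = toEnemy.magnitude;

        // Signed angle around the vertical axis, relative to player forward.
        float angle = Vector3.SignedAngle(transform.forward, toEnemy, Vector3.up);

        // distance and angle feed the proximity system.
    }
}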
Directional Threat Aggregation
When multiple enemies are nearby, the game computes a weighted average threat direction.
Each enemy contributes:
weight = 1 - (distance / maxRange)
directionVector = normalizedDirection * weight
All vectors are summed and normalized.
This produces a single direction bias for the UI shader.
This was far cleaner than trying to represent each enemy individually.
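A sketch of the aggregation, assuming a hypothetical Threat struct holding each enemy's direction and distance:

Vector3 AggregateThreatDirection(List<Threat> threats, float maxRange)
{
    Vector3 sum = Vector3.zero;

    foreach (var t in threats)
    {
        // Closer enemies contribute more strongly.
        float weight = Mathf.Clamp01(1f - t.distance / maxRange);
        sum += t.direction.normalized * weight;
    }

    return sum.sqrMagnitude > 0f ? sum.normalized : Vector3.zero;
}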
The Proximity Visual Cue Shader
Audio alone was insufficient when multiple enemies were present.
I implemented a screen-edge "oil slick" shader.
It renders procedural cellular noise biased toward threat direction.
Inputs:
- danger intensity (0–1)
- threat direction vector
The noise animates organically and responds in real time to both inputs.
This dramatically improved readability while preserving the horror aesthetic.
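On the C# side, feeding the shader is just a matter of pushing those two inputs to the material each scan; the property names here are illustrative, not the actual shader's:

void UpdateProximityShader(float dangerIntensity, Vector3 threatDirection)
{
    proximityMaterial.SetFloat("_DangerIntensity", dangerIntensity);
    proximityMaterial.SetVector("_ThreatDirection", threatDirection);
}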
UI Architecture
UI was implemented using:
- CanvasGroups for fading
- VerticalLayoutGroups for alignment
- Event-driven updates
Round labels fade in and out using coroutines.
Countdown uses realtime timing.
UI does not poll game state continuously.
It reacts to state transitions.
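A minimal fade sketch over a CanvasGroup; note the unscaled time, since fades often run while the game is paused:

IEnumerator FadeCanvasGroup(CanvasGroup group, float from, float to, float duration)
{
    float elapsed = 0f;

    while (elapsed < duration)
    {
        elapsed += Time.unscaledDeltaTime;
        group.alpha = Mathf.Lerp(from, to, elapsed / duration);
        yield return null;
    }

    group.alpha = to;
}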
Difficulty Scaling
Difficulty increases using generated round configuration.
Two independent variables scale:
- Max concurrent enemies
- Total enemies per round
Concurrent increases slowly.
Total increases faster.
This creates rising pressure without overwhelming instantly.
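A sketch of what config generation could look like; the field names and growth curves are illustrative, not the shipped values:

RoundInitValues GenerateRoundConfig(int round)
{
    return new RoundInitValues
    {
        maxConcurrentEnemies = 2 + round / 3,  // concurrent cap grows slowly
        totalEnemies = 5 + round * 2           // total pressure grows faster
    };
}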
Audio System Challenges
One bug occurred when AudioSources were destroyed during scene reload while coroutines still referenced them.
This caused MissingReferenceExceptions.
The fix was ensuring coroutines validate references before accessing them:
source != null && source.isPlaying
This highlights how asynchronous systems interact with object lifetimes.
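For example, a fade-out coroutine might guard every access, since the source can be destroyed mid-fade (a sketch):

IEnumerator FadeOutAudio(AudioSource source, float duration)
{
    float startVolume = source.volume;
    float elapsed = 0f;

    while (elapsed < duration)
    {
        // Bail out if a scene reload destroyed the source.
        if (source == null || !source.isPlaying) yield break;

        elapsed += Time.deltaTime;
        source.volume = Mathf.Lerp(startVolume, 0f, elapsed / duration);
        yield return null;
    }
}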
WebGL Deployment Considerations
Exporting to WebGL introduced new constraints:
- Audio requires user interaction before playback
- Pointer lock requires user gesture
- Compression requires correct headers
For simplicity, I used uncompressed builds during development.
The game integrates into a Next.js app via iframe embedding.
Lessons Learned
Clear ownership prevents chaos
Separating RunManager and RoundManager prevented state entanglement.
Events scale better than polling
Polling works early on, but events keep coupling manageable as systems multiply.
Coroutines simplify temporal logic
Coroutines naturally model sequences.
Hard resets are safer than soft resets
Reloading scenes avoids hidden state bugs.
Explicit scanning is more reliable than trigger events
Deterministic detection avoids missed edge cases.
Visual feedback improves fairness
Audio-only gameplay becomes unreliable under pressure.
Subtle visual augmentation preserves atmosphere while improving clarity.
Final Thoughts
This project reinforced that even small games benefit enormously from clean architecture.
Many of the hardest problems were not gameplay mechanics, but system coordination, timing, and state management.
Unity provides powerful tools, but using them correctly requires deliberate structure.
This prototype now has a solid foundation for expansion, refinement, and deployment.
The core loop is simple.
The systems supporting it are robust.
And most importantly, it feels tense, readable, and fair.