Project Bismuth Overview & Development Process
Project Bismuth was a cross-platform roguelite PvP game. Players would create a character and start in the Tavern, the non-combat social space, where they could customize their character's appearance by unlocking hair and clothing colors, and unlock and customize the weapons they brought into the dungeon. After jumping into the dungeon, players had to fight monsters AND other players to earn valuable loot and gain experience. Experience let players acquire new abilities throughout their run, making each run in the dungeon unique and exciting. In order to keep their loot, players had to extract before they were killed by other players, monsters, or the miasma.
I led UI/UX on Project Bismuth. I started as the primary designer, creating and implementing all designs, and over time moved to balancing design work with leading other UI/UX designers through feature work. I was heavily involved in high-level feature planning, system design, milestone progress management, design iteration, and testing. From start to finish, development was done by a small team over a year and a half.
Samples of Menus in the Game
Examples of UI screens taken from play footage. The UI is a blend of skeuomorphism where applicable and a lightly physical style for the rest. Because features were made with lean development, much of the UI was built via an atomic design system. This allowed ongoing visual improvements to be layered in alongside new feature development over time, with the goal of a fully realized, cohesive visual style by launch.
Milestone Progress Structure
In Project Bismuth, development followed a milestone delivery structure. Each milestone was a month long (4-5 weeks) with a primary "marquee" experiential deliverable and usually 2-3 additional smaller deliverables. These were scoped, timeboxed ambitions intended to target representative experiences on an aggressive timeline.
For long-term planning (Ref 1), this was a living document organized and maintained by the leads on the team, focused on getting to launch in Q3 of 2024. Stretches of time were defined by the area of focus (Meta Progression, Monetization, etc.), with each milestone defined by its marquee experience. Beneath those, the focus work of the various strike-team/pod groups was organized. This was used to organize high-level deliverables and facilitate resource planning for the leads on the project.
As one of the leads, I often used this to start socializing upcoming features to my design team, as they needed to work ahead of this schedule to have design ready, and to prepare the strike team for what was coming next. As testing results came back each milestone, I would use the data and learnings to re-evaluate the roadmap and make sure we were focusing on the right things in the right order to build certainty in the players' experience. Occasionally, testing would reveal crucial needs in existing features that might require expansion or iteration, in which case I would organize a proposal to present to the leads on what was needed, why it was important, and how I suggested the schedule be adjusted to accommodate that need.
For each milestone, a lead would usually own and organize all development progress for their deliverable, including pre-design, planning, organizing, tracking, communicating progress throughout, and presenting to the wider org. I was a dedicated feature dev lead every milestone, usually for the marquee feature.
This milestone development structure leans strongly toward being highly reactive rather than proactive about planning. The philosophy is that people are notoriously bad at estimating how long any task will take, especially when prototyping or doing anything new. So rather than dedicate time to making faulty assumptions, it's better to start working and learn and adjust as development takes place. This requires a lot of attentiveness and flexibility from the lead: tracking progress and shepherding changes to the plan in order to keep to an experiential target (e.g. "Players can spend resources to unlock new weapons").
The second provided image (Ref 2) is an example of a milestone's higher-level tracking board, used to keep the broader team and EP up to date on big software initiatives. Each lead also had a separate space for their strike team that broke down the work similarly but much more granularly, to ease managing interdependencies and time tracking.
This process is very effective for those able to juggle priorities and maintain focus and organization throughout the development cycle, though it proves much more difficult to maintain at larger scales (usually at or above 24-30 people).
UX/UI for Features
UI Kit
As is common practice in development today, the UX and UI were always built upon a flexible, expanding UI kit built in Figma. This kit was built to be as close to 1:1 as possible in organization and layout structure between Figma and the game engine (Unreal). This parity was essential to allow developers to stay organized and understand how to find and use the kit items in both environments. For additional ease of use, the export pages on these libraries had the file path embedded into the names of the sections holding each icon, so developers could easily find exactly where they were in the project (Ref 7).
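As a purely hypothetical illustration of that convention (the section names and engine paths below are invented, not the project's real ones), a small script could even sanity-check that the path embedded in an export section's name still matches where the asset lives in the engine:

```typescript
// Invented example data: each export section on a Figma library page
// carries the in-engine content path in brackets in its name.
type ExportSection = {
  sectionName: string; // Figma section name with the embedded path
  enginePath: string;  // where the exported asset lives in the Unreal project
};

const sections: ExportSection[] = [
  { sectionName: "Icons/Weapons [UI/Icons/Weapons]", enginePath: "/Game/UI/Icons/Weapons" },
  { sectionName: "Icons/Currency [UI/Icons/Currency]", enginePath: "/Game/UI/Icons/Currency" },
];

// Flag any section whose embedded path has drifted from the engine path.
for (const s of sections) {
  const embedded = s.sectionName.match(/\[(.+)\]/)?.[1];
  const inSync = embedded !== undefined && s.enginePath.endsWith(embedded);
  console.log(`${s.sectionName}: ${inSync ? "in sync" : "OUT OF SYNC"}`);
}
```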
The library was sectioned into several different files (Ref 3). This grouped like content together for easier asset browsing from outside feature files, and helped keep library files manageable in size as the project grew over time. The direction of the kit was built upon the philosophy of atomic design: starting from subatomic properties (colors, styles, fonts) to atoms (basic components like icons & panels), molecules (buttons, text fields, item tiles, etc.), and up to organisms (modals, flyouts, etc.).
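To make that layering concrete, here is a minimal sketch of how the tiers compose, with hypothetical stand-in components rather than the kit's actual contents:

```typescript
// Subatomic: shared design tokens (colors, styles, fonts).
const tokens = { accent: "#c97b2d", bodyFont: "Inter" };

// Atom: a basic building block, such as an icon.
const icon = (name: string) => `<icon:${name}>`;

// Molecule: atoms composed into a reusable control, such as a button.
const button = (label: string, iconName: string) =>
  `[${icon(iconName)} ${label} | ${tokens.accent}]`;

// Organism: molecules composed into a full UI region, such as a modal.
const confirmModal = (title: string) =>
  `${title}\n${button("Confirm", "check")} ${button("Cancel", "cross")}`;

console.log(confirmModal("Discard changes?"));
```

Each tier only ever builds on the tiers below it, which is what lets polish applied at the atom level propagate upward to every feature using it.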
These kit pieces are specifically intended to be any asset that needs to be used across multiple features or contexts. Any new feature design would start by incorporating assets from these libraries and expand from there as needed.
Anatomy of a Feature File
Cover
To keep files consistent and tidy, and to allow the state of designs to be understood throughout the process, I established a structure for feature files for myself and my designers to follow. This structure was designed into the feature cover component for easy reference (Ref 8). In the feature project folder, there was a blank feature template containing this structure for easy setup. The covers were also designed to look different from the library files so the two could be easily distinguished at a glance.
Graveyard
Although sorted to the bottom of the list, this is typically where any feature design exploration would start in the file. The graveyard is specifically a space for quick, messy explorations of any type of design problem a feature may have, from overall layout structure down to the nuance of icon design. As we worked through these design problems, we would contain the explorations inside sections on the page, each with an emoji to denote the problem's state (in progress, needs review, completed) and a name describing the design problem being solved within. This allows designers to show a history of how they solved their various problems, as well as letting other devs "read through" our UX/UI process in a space that would otherwise be an unintelligible mess.
Any approved designs on this page would then "graduate" up to the components page (or sometimes a library file).
Tap-Throughs
Sometimes, when working through the structure of flows, it is necessary to pull together interactive prototypes to review with the team and understand how the flow feels. To support this, there is a page specifically dedicated to organizing these flows. This page generally operates under the same rules as the graveyard and will typically contain feature components as well as unique, quickly made assets for fast explorations.
Components
The beating heart of the feature file, the components page is where any approved component belongs. These components typically start as a combination of UI kit pieces and simple low-fi greybox assets. They are organized by discrete screens, with each screen's sub-components organized beneath it. The screens here are largely used in the wireframes page, so as to maintain the connective tissue and allow design updates to propagate as they are made here.
Most new components start here, but if a new asset is expected to be needed outside the context of its specific feature, that component would instead be moved to one of the project libraries.
Wireframes
This page houses the unique flows that need design coverage for a feature. These flows would typically be outlined first in Miro as high-level flows built from feature verbs; here, the full screens are shown.
As Bismuth was being designed from day one as a cross-platform game, the flows also needed to be shown for monitor and mobile contexts and scales.
The screens shown in these flows utilize the layouts from the components page. This allowed the wireframes here to be largely self-maintaining, as updates made to the base components (especially visual polish) would propagate through these instances of the screens automatically.
Export
For any visual asset that is needed for a feature but does not graduate to a library asset, there is a page in the feature file that houses the frame exporting that asset. This allows any updates to be easily located and executed.
Testing
Especially when working on a game with a lot of innovation, data is key to developing confidence in the software being produced. Our team leaned heavily into many forms of testing: daily playtests, feature Roman voting, prototype A/B tests, milestone validation, and quarterly game validation. As the experience lead, I either directly designed and led the majority of these testing initiatives or collaborated heavily on them.
As a standard part of the process, the team would playtest the game for 30 minutes almost every day, both to keep a pulse on the game's progress and to encourage teamwide investment in how each person's work affected the software experience as a whole.
Roman Voting
Roman voting is a validation process done internally to get a "temperature check" on how successful the design/implementation of a system is. Team members are directed to play a specific segment of the game "normally" and are then asked specific yes/no questions about how they felt it played. These questions are usually derived from the goals established at the beginning of the design process. All team members are required to vote, and can only vote yes or no. Any "no" vote is followed by the voter explaining the issues that led to the "no". This enables early, iterative learning about the experience before the design calcifies and becomes hard to change.
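For illustration, here is a rough sketch of the tally mechanics (the types, members, and example question are hypothetical; the real process was run by hand in team reviews, not in code):

```typescript
// A "no" vote must carry an explanation of the issue behind it.
type Vote =
  | { member: string; yes: true }
  | { member: string; yes: false; issue: string };

function tally(question: string, votes: Vote[]) {
  const yes = votes.filter((v) => v.yes).length;
  // Collect the explanations attached to every "no" vote.
  const issues = votes.flatMap((v) => (v.yes ? [] : [`${v.member}: ${v.issue}`]));
  return { question, yes, no: votes.length - yes, issues };
}

// Example: a yes/no question derived from the feature's original design goals.
console.log(
  tally("Did unlocking a new weapon feel rewarding?", [
    { member: "A", yes: true },
    { member: "B", yes: false, issue: "Unlock costs felt arbitrary" },
    { member: "C", yes: true },
  ])
);
```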
A/B and Milestone Testing
If, during the design process, we hit a branch where two or more potential solutions felt valid, we would leverage external testing to get some signal on the effectiveness of our design assumptions. In early to mid development we made heavy use of the service UserTesting, since it offered very fast turnarounds, especially on small experience tests, and also returned gameplay footage that provided a lot of insight into the actions players were taking unconsciously.
The goal of milestones was to complete an experience to a "presentable" level so that the game felt incrementally more "complete" without feeling distractingly debuggy. At the conclusion of every milestone, a build would be packaged and sent out to external playtesters via UserTesting.com or GameTester.gg - anywhere between ~24 and 100 players.
The focus of the test would be to have players experience the whole game while concentrating on the newer features, then fill out a questionnaire to gauge their reception of the designs. If any design came back from testing lower than we were looking for, that feature would be evaluated as a candidate for prioritization in the next milestone, giving us another iteration to improve player sentiment.
Phoenix Labs held a bi-weekly company check-in that allowed teams to show their progress on the various games. Usually, for the end-of-month show-and-tell, I would present to the rest of the company on behalf of the team: the work that had been done, what we had learned, and how we planned to proceed next.
Quarterly Game Testing
A year into development, we began running quarterly, larger-scale external tests to validate the game mode and its broader appeal to our target player base. These required more upfront planning and setup, since we needed to support a larger group and to focus the players' prompts and the survey to get useful data back. I collaborated heavily with our EP on these tests.
The big tests had their own internal goals to meet, and the limitations of the two testing platforms required planning around how to direct players and survey them afterwards. I drafted and refined the recruitment screeners, the tests themselves, and the follow-up survey.
Once the data came back, it was synthesized into numbers and feedback that we used to validate certain designs and prioritize future feature planning.
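As a rough sketch of that synthesis step (the rating scale, feature names, and threshold below are hypothetical, and the actual tooling isn't covered here), survey responses can be rolled up into per-feature averages to flag candidates for another iteration:

```typescript
type Response = { feature: string; rating: number }; // e.g. a 1-5 scale

// Average the ratings for each feature across all respondents.
function averageByFeature(responses: Response[]): Map<string, number> {
  const sums = new Map<string, { total: number; count: number }>();
  for (const { feature, rating } of responses) {
    const s = sums.get(feature) ?? { total: 0, count: 0 };
    sums.set(feature, { total: s.total + rating, count: s.count + 1 });
  }
  return new Map([...sums].map(([f, s]) => [f, s.total / s.count]));
}

// Flag features scoring below a chosen bar as candidates for the roadmap.
const scores = averageByFeature([
  { feature: "weapon unlocks", rating: 4 },
  { feature: "weapon unlocks", rating: 5 },
  { feature: "extraction", rating: 2 },
]);
for (const [feature, avg] of scores) {
  if (avg < 3.5) console.log(`${feature}: ${avg.toFixed(1)} (revisit next milestone)`);
}
```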