Interactive Tour Display w/ Stories & Sound

Categories: UI design, sound design, interaction design, accessibility, exhibit design, environmental design, HTML, CSS, JavaScript (Howler.js, AJAX, JSON)


The client is a large missions organization/religious nonprofit with a tour incorporated into their headquarters. The tour is being renovated after 20 years, and many of the exhibits are moving to large screens incorporating stories on video, animated infographics, and still photo exhibits.

We were given the task of creating a piece that incorporated text stories read by a narrator, which included small avatars depicting the people involved in the story, or an icon representing some integral part of the story. The 14 stories were divided into two categories – people who had been killed in the course of their work, and stories of apparent miracles – and were all different lengths. Some had specific dates tied to them, and they all had locations. It was necessary to be able to add new stories, remove selected ones, and activate or deactivate them as needed.

The display will be implemented on a 75″ 4K touchscreen television mounted vertically, driven by a device capable of running HTML, CSS, and JavaScript.

A consultancy was doing the broader tour layout and provided mockups of the space.

Plans included providing for ADA compliance to allow persons in wheelchairs to access the interface.

Top-down layout of the tour. Visitors would move from left to right. Our display would be at the location labeled P.

Reach Considerations

We were not provided with any information about how high off the ground the screen would be mounted, but were told that the images were to scale. We knew that the wall was 8′ tall and the screen was a 75″ touchscreen, which allowed us to scale out the rest of the display.

It also seems like their “model” was about 4’9″.

The entire project happened during the height of the second COVID-19 wave in Central FL, so I was unable to interact with the 75″ screen in person or to recruit people for user/interface testing. I am average height for an American male (5’9″), my wife is average height for an American female (5’4″), and we have children ranging from 9 years old to 10 months old. Additionally, we had a small number of friends that we had “containered” with during the worst of the pandemic. Using these stand-ins, I laid out the proper placement of the screen on a wall.

I combined these data points with ADA-compliant reach ranges and laid them out over the initial visual designs (created by a coworker):

I created mockups to assess how they would feel in a real world space.

Challenges to Prototyping & Refinement

Screen Distance

The visual designer and I were very aware of our limitations due to not having access to anything approaching a 75″ 4K screen in portrait orientation. Without approximating the experience, it would be far too easy to oversize images or undersize text (as in the mockups above). I got hold of a 4K monitor that could be rotated, but I had to make sure I was the proper distance from it to “test” the designs and confirm the elements were readily discernible.

According to the Society of Motion Picture and Television Engineers, the optimal viewing angle in theaters is between 35° and 55° of the horizontal field of vision (FOV), and the middle 60° of the vertical FOV is considered the “central” portion. We decided to use 40° of the vertical FOV as our “sweet spot” to determine how far back users were likely to stand when not interacting directly with the screen. We also allowed for a range of up to 50° and down to 30° of vertical FOV. With our screen approximately 65″ tall, I calculated the FOV at a given distance using the equation:

Vertical FOV = arctan(screen height ÷ viewer distance)

Viewer Distance    Screen Size    Vertical FOV
55″ / 4’7″         65″            49.76°
77″ / 6’5″         65″            40.17°
113″ / 9’5″        65″            29.91°
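As a sanity check, each row of the table can be reproduced with the same one-line arctangent calculation. This is just a sketch for illustration; `verticalFov` is my name for it, not project code.

```javascript
// Vertical FOV (in degrees) of a screen of a given height seen from a
// given distance, using a simple arctangent model. Both inputs in inches.
function verticalFov(screenInches, distanceInches) {
  return Math.atan(screenInches / distanceInches) * (180 / Math.PI);
}

console.log(verticalFov(65, 55));  // ≈ 49.76°
console.log(verticalFov(65, 77));  // ≈ 40.17°
console.log(verticalFov(65, 113)); // ≈ 29.91°
```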

My monitor is 27.9″ at its widest dimension. I had to place it about 33″ away to get it to approximately 40° of my vertical FOV.

Font Size

The primary element that needed to be dialed in properly was the font size for the body copy. Generally, minimum reading size is understood to be 13 arc minutes (13/60ths of a degree) of the FOV.

Illustration via an online font-size calculator.

I assumed that Chrome’s standard font size of 16px is sufficiently readable at the 33 inches I was from my screen. I calculated its apparent size in arc minutes using the same equation as above, multiplying the result by 60 to convert degrees to arc minutes, and got 17.36 arc minutes. At that scale, rounding up to 18 is negligible and would only improve readability.
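The arc-minute figure falls out of the same arctangent math. A sketch, assuming a typical 96dpi desktop display (so 16px is 1/6 of an inch tall); `arcMinutes` is an illustrative name:

```javascript
// Apparent height of an object in arc minutes, given its physical
// height and the viewer's distance (both in inches).
function arcMinutes(heightInches, distanceInches) {
  return Math.atan(heightInches / distanceInches) * (180 / Math.PI) * 60;
}

// Assumption: 16px text on a 96dpi display is 16/96 = 1/6 inch tall.
console.log(arcMinutes(16 / 96, 33)); // ≈ 17.36 arc minutes
```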

Units & Affordances

Both my visual designer and I are used to creating designs that will be implemented on a wide variety of screen ratios and sizes, and we were operating in that paradigm. It was only at this point that we realized that our screen would always be exactly the same size and ratio. This led us to think of all elements’ sizes in terms of the vh and vw units.

At a 4K portrait resolution, and at our assumed viewer distance, 18 arc minutes translated to approximately 0.8vh (8/1000 of the screen height).

After some feedback, and to make the text of the stories crystal clear to viewers, we ultimately increased the body size to 1.155vh. This is approximately equivalent to 25.6 arc minutes, or 22pt text on my own screen.

One of the tests I conducted in sample HTML documents.

Back End Structure

Making it Easy to Update

With the requirement that the experience be updatable, it was assumed that it should be as easy to update as possible. During the ramp-up to the project we were provided with a document with all of the stories (including some that would not be active at the time of launch), and the images of the people who had been killed.

I created a JSON file for each category with the following structure:

  name:       string
  slug:       string (unique)
  active:     boolean
  location:   string
  date:       string
  paragraphs: [ an array of strings ]
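As an illustration, a single story entry following that structure might look like this (the name, location, and date here are entirely made up):

```json
{
  "name":       "Jane Doe",
  "slug":       "jane-doe",
  "active":     true,
  "location":   "Central Asia",
  "date":       "1997",
  "paragraphs": [
    "First paragraph of the story…",
    "Second paragraph of the story…"
  ]
}
```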

I then used Mustache.js to create templates for each story, and each category’s menu pages. This allows the entire experience to be a single page, removing loading time between button pushes, and allowing for smoother transitions. All of the stories are loaded in using AJAX at the launch of the page.
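Conceptually, each template fill works like the toy substitution below. This is a stand-in to show the shape of the approach, not Mustache itself, which also handles sections, HTML escaping, and partials via Mustache.render(template, view).

```javascript
// Toy stand-in for Mustache.render(): replaces {{key}} tokens in a
// template string with values from a view object.
function render(template, view) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => view[key] ?? "");
}

// Hypothetical story template and view data for illustration.
const storyTemplate = '<h2>{{name}}</h2><p class="meta">{{location}}</p>';
const html = render(storyTemplate, { name: "Jane Doe", location: "Central Asia" });
console.log(html); // <h2>Jane Doe</h2><p class="meta">Central Asia</p>
```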

Updating Process

Adding new content is a 3-step process:

  • Add the content to the appropriate JSON file.
  • Add the representative avatar/icon to the associated image directory. It must be named the same as the slug field in the story’s JSON entry.
  • Add the audio track to the audio directory. The file must be named slug.mp3.

When the page is reloaded, it will load the new story.

Activating a story that is in the JSON file only requires setting active to true.

Storytelling Experience

The storytelling experience is the core of what we were creating: a way to take in each story both audibly and visually. As such, it was necessary to coordinate the audio of slug.mp3 with the scrolling of the story (when the story was long enough to scroll). Additionally, a music bed was requested under the story audio tracks, but the team didn’t want to bake the bed into each story’s track, preferring a separate bed.mp3.

I landed on using Howler.js to coordinate the tracks and the scroll, primarily making use of the following functions:

  • .duration() to retrieve the length of the story mp3.
  • .seek() to make sure that the stories always started at the beginning when pages are reloaded.
  • And the obvious .play(), and .pause().

The JS file does the following when each story is loaded (if the UI is not muted):

  1. Fade in the page, then scroll it to the top (a user could have scrolled lower, and the browser maintains that scroll position).
  2. Find the length of the story mp3. Using:
    duration = eval(slug+"_audio.duration()")*1000
    Note: If this product was going to be used online, eval() would have to be avoided as it is patently insecure.
  3. Set the playback position of the story mp3 to its beginning.
  4. Start scrolling the story text with a duration equal to that of the story mp3.
  5. Start playback of the story mp3.
  6. When the story text reaches the end it stops, at approximately the same time as the mp3.
  7. Three seconds later, the audio bed fades out.
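The eval() lookup in step 2 can also be avoided offline by keeping every Howl instance in a plain object keyed by slug. A sketch of that pattern, using stub objects (with made-up slugs and durations) in place of real `new Howl({ src: [slug + ".mp3"] })` instances:

```javascript
// One lookup object instead of eval(): each story's audio is stored
// under its slug. Stub objects stand in for real Howl instances here.
const sounds = {
  "jane-doe":  { duration: () => 94, seek: () => {}, play: () => {} },
  "some-slug": { duration: () => 61, seek: () => {}, play: () => {} },
};

function startStory(slug) {
  const audio = sounds[slug];
  const durationMs = audio.duration() * 1000; // scroll duration (step 4)
  audio.seek(0);                              // rewind to the start (step 3)
  audio.play();                               // begin playback (step 5)
  return durationMs;
}

console.log(startStory("jane-doe")); // 94000
```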

If the UI is muted, there is no auto-scroll, and the story loads in a paused state. The user can hit play, which starts the auto-scroll but not the audio.

When the user navigates to the “next” or “previous” story, the audio and scroll pause and fade out, and the new story then proceeds through the list above.

The pause button pauses the audio as well, and resumes from the current playback position unless it is within the first 3 seconds.

The UI also includes a “restart” button.

Accessibility Considerations

Every time a menu page is loaded, it fades in as it scrolls to the top, to indicate that it can scroll down. When a user scrolls down on the touchscreen, the links to all stories fall within ADA reach-range guidelines.

Throughout the experience, all control buttons are within the ADA recommended range, as well as the average reach range for American adults.

Monkey-Test Considerations

The groups that regularly go through the tour experience range from a handful of middle-aged vacationers to hundreds of middle-school students over a 2 hour period. Inevitably, someone is going to tap some combination of buttons that will “break” the experience, requiring it to be reloaded.

We talked with the tour designers and managers and found out that they do not intend on having any other kind of interface (keyboard or mouse) connected to this display, and would not be able to easily interact with it other than through the provided touchscreen.

To that end, we added a reload “easter egg” so that 8 consecutive taps in an invisible 5vh x 5vw box in the bottom-left of the screen will refresh the entire page and re-load all of the story and audio data.
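The counting logic behind that easter egg is simple. A minimal sketch (function and variable names are mine; the real implementation also resets the count when a tap lands outside the hot zone, which this omits):

```javascript
// Reload "easter egg": after N consecutive taps in the corner hot zone,
// fire the reload callback. reloadFn is injectable for testing; on the
// real page it would be () => location.reload().
function makeCornerTapHandler(reloadFn, requiredTaps = 8) {
  let taps = 0;
  return function onCornerTap() {
    taps += 1;
    if (taps >= requiredTaps) {
      taps = 0;
      reloadFn();
    }
  };
}

let reloaded = false;
const tap = makeCornerTapHandler(() => { reloaded = true; });
for (let i = 0; i < 8; i++) tap();
console.log(reloaded); // true
```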


I have uploaded the entire project repository on GitHub at:

Note: for the purposes of development, I recorded or synthesized the story mp3s, chose a stock video, and selected a stock music bed for the project. All of these were intended as artifacts to work around and with, and will be updated and improved in the final implementation.

Viewing on Other Devices

If you would like to see and interact with the project, it is available at:

Nothing will look right unless you view it at the proper aspect ratio of a 4K screen in portrait orientation. You can achieve this in Google Chrome’s DevTools by creating a custom device view and setting the resolution to

Width: 2160
Height: 4096

When the page loads, the guidelines displaying the average and ADA reach parameters will show on the page, you can hide them by tapping the bottom-left corner of the interface.

Divergent Events Labeling, Google Analytics, Sheets, Scripting

I’ve been assigned to a project at work that is going to require a significant amount of quantitative and qualitative user data to complete. Our main product is an app that was released in 2012 and is on its 3rd major version. Some significant bits about it:

  • Localized in 18 languages.
  • Content in 1600+ languages.
  • 120,000 total videos.
  • Android and iOS app.
  • Separate contractor development teams.
  • Managed by same internal person.
  • Android: 65% of users (~67,000 in 2017)
  • iOS: 35% of users (~36,000 in 2017)

Analytics Platforms

  • GA
  • Adobe
  • Crashlytics
  • ME2
  • Flurry
  • Currently testing AppSee and UXCam

As I’ve dug through data, I’ve begun to realize that our event reporting is inconsistent between OSes. Event labels have not been given precise and clear definitions. This is going to be a significant hurdle, so I’ve been seeking to sort that out first.

Signs of Trouble

The first sign of problems was discovered during the final feature build before the research project was to begin. One part of the feature integration was adding an icon/button to an already-crowded interface. The new feature was to be highlighted, as it was part of an integration with an app whose use we wanted to promote. It was clear that we would need to consider moving one or two previously existing icons into a context menu to make room for the new one. We did not have the time or resources to run a round of user testing to help determine which ones would be harder to find, so I was asked to make recommendations based on analytics alone.

From the collected event data in GA, I discovered the following anomalies:

  • Of all of the events possible to perform with the icons on the taskbar in question, the “Sharing – Generic” action was reported as 90% of all actions.
  • Of the “Sharing – Generic” actions, 90% of those were Android (Android user share was ~65%).
  • For all other actions on this taskbar, 90% of the events were iOS.
  • “Sharing – Generic” was 98.5% of all Android actions for this taskbar.
  • “Sharing – Generic” was 30% of all iOS actions for this taskbar.

The UIs for the two OSes are different, but do not seem different enough to warrant such vastly different event-reporting numbers.
I met with the development teams and discovered that they were using “Sharing – Generic” in completely different ways. For Android, every single time a user touched this button, it logged “Sharing – Generic” and only that event. For iOS, it was logged only when content was successfully shared using an app outside of those that had their own specific sharing event, e.g. “Sharing – Facebook”, and some recent versions of the app logged the string of the app name, e.g. “”.

Digging into the data

Prior to this point, I didn’t have access to some of the analytics platforms; I had previously been assigned only as a UI designer and was still onboarding into the UX role. I was granted access to Adobe Analytics and Flurry. The learning curve on AA was too steep for the quick turnaround the project required, and I discovered that events weren’t being consistently logged to Flurry at all.
I decided to stick with GA and pull out what data I could to begin a full event-logging audit.

I generated two reports and pulled them into a spreadsheet:

  • All events logged in 2017.
  • All events logged by any app version released 1 January – 31 May 2018. (This was to ensure completely different data sets.)

Using Google Sheets’ scripting features, I wrote scripts to determine and extract:

  • “All Event Labels 2017” – Unique event labels and total events logged for each across all OSes and Versions used in 2017.
  • “All Event Labels 2018 Releases” – Unique event labels and total logged in each app version released in the first 5 months of 2018.
  • “Android Only”/“iOS Only”/“Both” – which events were common to both OSes or unique to one; that is, which events were logged only in Android, only in iOS, or in both.
  • “None in 2018” – which events were logged in 2017, but not at all in the first 5 months of 2018.
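The OS-comparison buckets boil down to simple set operations. A sketch in plain JavaScript (which is also the language of Google’s scripting features); the function name and event labels here are illustrative, not the actual audit script:

```javascript
// Classify event labels by platform: given the labels logged by each OS,
// produce the "Android Only" / "iOS Only" / "Both" buckets.
function classifyLabels(androidLabels, iosLabels) {
  const android = new Set(androidLabels);
  const ios = new Set(iosLabels);
  return {
    androidOnly: [...android].filter((l) => !ios.has(l)),
    iosOnly:     [...ios].filter((l) => !android.has(l)),
    both:        [...android].filter((l) => ios.has(l)),
  };
}

// Illustrative labels only.
const buckets = classifyLabels(
  ["Sharing - Generic", "Video Play", "Download"],
  ["Sharing - Generic", "Sharing - Facebook", "Video Play"]
);
console.log(buckets.both);        // → ["Sharing - Generic", "Video Play"]
console.log(buckets.androidOnly); // → ["Download"]
console.log(buckets.iosOnly);     // → ["Sharing - Facebook"]
```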

I went line by line through “None in 2018,” comparing it with “All Event Labels 2017” to determine which events were likely deprecated intentionally and which seemed to be accidentally dropped. Some of these were confirmed through conversations and live testing in GA’s “Live View.” (Neither developer team had a comprehensive list of currently integrated events.)

Pursuing Solutions

I compiled a spreadsheet with the following categories:

  • iOS Only Reported
  • Android Only Reported
  • Both Reported
  • Reported in 2017, but not 2018
  • Confirmed Deprecated
  • Not reporting, but should be

I asked the developers to look through the lists and confirm which events belonged where, along with notes on the data parameters currently being collected with each.

After determining which ones were currently in use, I worked with the developers to clearly define what should trigger each event, which events needed to be added, and how to sync the two platforms so they report the same, or at least functionally equivalent, data.

In the end, this process was a catalyst for a full rebuild of the app on both platforms. I helped create the event-labeling specifications.

1995 Chevy Lumina Battery Access, or “Did the designers ever OWN a car?”

Broken socket-connector & cracked socket.


In the continuing saga that started Tuesday night with our car breaking down…

Took the car to an AutoZone to get the alternator and battery tested – short story shorter, the battery had to be replaced and I had to do it. AutoZone, I believe, will usually change your battery for you, but not for me. The designers that GM employed to design the Chevy Lumina (at least the 1995 version) must have been new on the job.

The process to get the battery out was this:

1) Open hood.

Simple enough, that makes perfect sense.

2) Remove three bolts. Remove two bolts on one end and swing a support strut out of the way.

One was frozen, but I am so strong that I broke a socket and a socket connector (pictured). For the record, I am super strong. [Note: This is sarcasm. I am not super strong.] Swing the stupid arm that shouldn’t be over the battery out of the way. Why would you put the battery under a support strut?

3) Disconnect and remove the windshield-washer fluid reservoir?!?@!

Why would someone put the windshield-washer fluid reservoir over the battery? This is the stupidest thing ever.

4) Oh, crap. There’s another piece of metal over top of the battery. I guess I’ll remove that too.

I shouldn’t have guessed that would be easy. It wasn’t.

Hello busted knuckles!

I couldn’t get at the bolts very well – there was a hidden one, too. After I wrestled it out from underneath the air filter housing, I did a piece-of-metal-ectomy; that thing didn’t go back in. It was far too bent up to be useful anymore.

5) What the crap? Why isn’t the battery moving now?!

There was some other random stupid bolt that was holding it down. Got it.

6) How do I disconnect this thing?

Now – granted – this is my own stupidity here, but I wasn’t sure that I wouldn’t be shocked as I unscrewed the connectors from the side of the battery.

I wasn’t, I’m still alive.

7) The re-insertion.

No problems, just too much to re-insert.

I had avoided, for a lot of years, having to replace the battery in this thing myself. I have hated the thought of it for the 6 years that I’ve been a co-owner of it and now I know that my fear and loathing of the thought were warranted.

After owning this car and other stories that I’ve heard, I will never buy a GM car if I can help it.

The battery was under the windshield-wiper fluid!!!! What the heck?