Visual Inspector & Dev Tools
Note: This documentation is for the old 0.6.0 version of A-Frame. Check out the documentation for the current 0.7.0 version.
This section will go over many useful tools that will improve the VR development experience:
- A-Frame Inspector - Inspector tool to get a different view of the scene and see the visual effect of tweaking entities. The VR analog to the browser’s DOM inspector. Can be opened on any A-Frame scene with `<ctrl> + <alt> + i`.
- Motion Capture - A tool to record and replay headset and controller pose and events. Hit record, move around inside the VR headset, interact with objects with the controller. Then replay that recording back on any computer for rapid development and testing. Reduce the amount of time going in and out of the headset.
- React DevTools - If using React or Preact with A-Frame, we can use React DevTools to inspect React component props, state, and tree.
- Hot Loading - If using React or Preact with A-Frame, we can use hot module replacement (HMR) to tweak React Components in real-time. All application state and internal A-Frame entity data will be preserved, letting us see changes in real-time even in the headset.
- Redux DevTools - If Redux is integrated with A-Frame, we can use Redux DevTools to inspect the application state and all of its changes. Or do actions such as time travel by committing and rewinding state.
We’ll also go over GUI tools built on top of A-Frame that can be used without code, and touch on other tools that can ease development across multiple machines.
The A-Frame Inspector is a visual tool for inspecting and tweaking scenes. With the Inspector, we can:
- Drag, rotate, and scale entities using handles and helpers
- Tweak an entity’s components and their properties using widgets
- Immediately see results from changing values without having to go back and forth between code and the browser
The Inspector is similar to the browser’s DOM inspector but tailored for 3D and A-Frame. We can toggle the Inspector to open up any A-Frame scene in the wild. Let’s view source!
The easiest way to use the Inspector is to press the `<ctrl> + <alt> + i` shortcut on our keyboard. This will fetch the Inspector code via CDN and open up our scene in the Inspector. The same shortcut toggles the Inspector closed.
Not only can we open our local scenes inside the Inspector, we can open any A-Frame scene in the wild using the Inspector (as long as the author has not explicitly disabled it).
See the Inspector README for details on serving local, development, or custom builds of the Inspector.
The Inspector’s scene graph is a tree-based representation of the scene. We can use the scene graph to select, search, delete, clone, and add entities, or export HTML.
The scene graph lists A-Frame entities rather than internal three.js objects. Given HTML is also a representation of the scene graph, the Inspector’s scene graph mirrors the underlying HTML closely. Entities are displayed using their HTML ID or HTML tag name.
The viewport displays the scene from the Inspector’s point of view. We can rotate, pan, or zoom the viewport to change the view of the scene:
- Rotate: hold down left mouse button (or one finger down on a trackpad) and drag
- Pan: hold down right mouse button (or two fingers down on a trackpad) and drag
- Zoom: scroll up and down (or two-finger scroll on a trackpad)
From the viewport, we can also select entities and transform them:
- Select: left-click on an entity, double-click to focus the camera on it
- Transform: select a helper tool on the upper-right corner of the viewport, drag the red/blue/green helpers surrounding an entity to transform it
The components panel displays the selected entity’s components and properties. We can modify values of common components (e.g., position, rotation, scale), modify values of attached components, add and remove mixins, and add and remove components.
The type of widget for each property depends on the property type. For example, booleans use a checkbox, numbers use a value slider, and colors use a color picker.
We can copy the HTML output of individual components. This is useful for visually tweaking and finding the desired value of a component and then syncing it back to source code.
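For example, after dragging a box into place in the Inspector, the copied component HTML might look something like the following (the entity and values are purely illustrative):

```html
<!-- Illustrative values copied back from the Inspector into source code -->
<a-box position="0 1.2 -3" rotation="0 45 0" scale="1 1 1" color="#4CC3D9"></a-box>
```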
We can press the `h` key to see a list of all the shortcuts available.
Room scale VR can be cumbersome to develop. With every change to the code, we have to:
- Open a web page (often running on a separate computer)
- Enter VR
- Put on the headset
- Grab the controllers (often having to turn them back on)
- Do our test run with the headset and controllers
- Take off the headset and controllers and pop open the browser development tools
- Restart the browser if necessary, since experimental VR browsers are currently buggy
Room scale VR development becomes molasses. But we’ve come up with a workflow to supercharge VR development so we can automate, develop rapidly, and work on the go: the A-Frame Motion Capture Components.
With the motion capture components, we can record VR actions (e.g., headset and controller movement, controller button presses), and repeatedly replay those VR actions, on any device from anywhere without a headset.
Below are several real-life use cases of motion capture vastly improving VR development ergonomics:
- Faster test trials: No need to take the headset on and off, enter VR, grab the controllers, do manual actions, or restart browsers. Just record once and develop for hours on a single recording.
- Development on the go: Rather than having to re-enter the headset and VR every time we want to test something, we can take our recording, send it to, say, a MacBook, head out to a coffee shop, and continue developing our VR application using the recording on a stable browser. Add some `console.log`s, refactor our application, or freeze the replay with the A-Frame Inspector (`<ctrl> + <alt> + i`) to poke around.
- Automated integration testing: We can record a bunch of different recordings as regression test cases for QA. Store the recordings, do some development, and occasionally click through the recordings to make sure everything still works. We can store multiple recordings in our projects for later testing.
- Multiple developers sharing one headset: One developer can take a recording with the Vive and go off somewhere else to develop with the recording, leaving the Vive free for the other developers to use or take recordings.
- Requests for recordings: Perhaps we don’t have a Vive or Rift handy but our colleague or friend does. Give them a link to our web application, maybe via ngrok (isn’t the Web awesome?), have them take a recording, and send it to us! Now we’re unblocked from developing.
- Demonstrating bugs: Or let’s say we found a bug in a VR web application and want to show it to someone. Take a recording and send it to them to debug. No need to give bug reproduction steps; it’s all in the recording!
- Automated unit testing: We can use recordings with unit testing frameworks such as Karma and Mocha to replay the recording and make assertions. For example, touch a box and check that it changes color. See A-Frame Machinima Testing by William Murphy for an example.
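The touch-a-box-and-assert idea can be sketched without a headset at all. Below is a minimal, hypothetical example in plain JavaScript: the entity is a hand-rolled stand-in object (not the real A-Frame entity API), and the handler is invoked directly the way a replayed recording would trigger it.

```javascript
// Minimal stand-in for an A-Frame entity: just enough attribute storage
// to unit test an interaction handler without a browser or headset.
// (Hypothetical sketch; not the real A-Frame entity implementation.)
function makeEntity(attrs) {
  const data = Object.assign({}, attrs);
  return {
    getAttribute: (name) => data[name],
    setAttribute: (name, value) => { data[name] = value; },
  };
}

// Handler under test: turn the box red when it is touched.
function onTouch(box) {
  box.setAttribute('color', 'red');
}

// A replayed recording would emit the touch event; in a unit test we
// invoke the handler directly and assert on the result.
const box = makeEntity({ color: 'blue' });
onTouch(box);
if (box.getAttribute('color') !== 'red') {
  throw new Error('expected the box to turn red on touch');
}
console.log(box.getAttribute('color')); // -> red
```

In a real Karma/Mocha setup, the replayed recording drives the actual scene and the assertion runs against the live entity instead of a stand-in.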
Read the Motion Capture documentation for more information. Here’s how to set up the recording:
- Drop the Motion Capture Components script tag into our HTML file
- Add the `avatar-recorder` component to the scene (i.e., `<a-scene avatar-recorder>`)
- Enter VR
- Hit `<space>` to start recording
- Record movements and actions
- Hit `<space>` to stop recording
- Save the recording JSON file, or upload it by hitting `u` to get a short URL to share between computers
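Putting the steps above together, a minimal recording setup might look like the following (the script paths are illustrative; use whatever A-Frame and Motion Capture Components builds the project serves):

```html
<html>
  <head>
    <script src="https://aframe.io/releases/0.6.0/aframe.min.js"></script>
    <!-- Assumption: the Motion Capture Components build served from our project -->
    <script src="aframe-motion-capture-components.min.js"></script>
  </head>
  <body>
    <!-- avatar-recorder records headset/controller poses; hit <space> to start/stop -->
    <a-scene avatar-recorder>
      <a-box position="0 1 -3" color="#4CC3D9"></a-box>
    </a-scene>
  </body>
</html>
```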
Now we can replay the recording. We can try recording the camera with WASD and mouse drag controls right now on a desktop. Head to the Record Example, open the browser Console to get feedback, and hit `<space>` to start and stop recording!
By default, the recording will also be saved to and replayed from localStorage. If we want to take our recording on the go, here’s how to replay a recording (assuming we already have the script tag above):

- Put the recording file somewhere accessible to the web page (i.e., in the project directory or online)
- Add the `avatar-replayer` component to the scene (i.e., `<a-scene avatar-replayer>`)
- Append `?avatar-recording=path/to/recording.json` to the URL, or set `<a-scene avatar-replayer="src: path/to/recording.json">`
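Concretely, a replaying scene might look like this (the recording path is just an example for illustration):

```html
<!-- Assumption: recording.json is a saved motion capture recording in the project -->
<a-scene avatar-replayer="src: recordings/recording.json">
  <a-box position="0 1 -3" color="#4CC3D9"></a-box>
</a-scene>
```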
Then replay the recording on any device from anywhere without a headset to our heart’s content. Get in the headset, record some clicks, and then from a laptop, we can build event handlers on top of the controller events we emitted in the recording.
The A-Frame Motion Capture Components have a Spectator Mode feature that can be enabled from the `avatar-replayer` component. This lets us view the recording from a third-person view, which is useful since it is hard to see what was happening in first person: the first-person view is naturally shaky, hands occlude the camera, actions happen off-screen, and it’s hard to focus on one place if the camera is always moving away. Spectator Mode lets us freely move around the scene and view it from whatever angle, or focus on whatever area we want.
People have built tools on top of A-Frame to abstract away code via an interface or application, making content creation even easier. These act as more complete editors rather than developer tools.
“The new design app for virtual reality. Whether you want to prototype VR interactions, or create fully immersive experiences, WebVR Studio helps you get there. Design impressive VR scenes for phone and desktop browsers.”
“Manage your virtual reality spaces and assets like you would manage blog posts. Run it on your own server. All you need is PHP and a database (eg. MySQL, MariaDB).”
“Create your own VR story now. Fader allows you to create and publish VR stories. Add multiple layers of information to your 360 spheres, design scenes and tell your story. Easy, fast and web based!”
“Fader is designed and developed by Vragments, a Berlin based Virtual Reality startup. Vragments is a team of technologists and journalists who are dedicated to bring new ways of storytelling to content producers by providing an easy-to-use VR tool.”
With VR development, it is common to develop across multiple machines. For example, developing on a laptop and testing on a VR desktop. The tools below help with that process:
Synergy lets us share one mouse and keyboard between multiple computers. For example, this lets us control a desktop from a laptop. We can code from the laptop. Then using the laptop, we can control the desktop to refresh the browser, enter VR, visit different URLs, take motion capture recordings, or inspect the browser’s developer tools. No need to have two sets of keyboards and mice on our desk space.
A Synergy Basic license costs $19, but it is well worth it if we are developing with multiple computers.
ngrok lets us easily expose a local development server for other computers to access, even through firewalls or NAT networks. The steps are to:
- Download ngrok
- Open the command line and head to the same directory ngrok was downloaded to
- Have a local development server running (e.g., `python -m SimpleHTTPServer 8080`)
- Run `./ngrok http <PORT>` (e.g., `./ngrok http 8080`)
- An ngrok instance will spin up and provide a URL that the other computer can access from its browser
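The steps above as a single terminal session (a sketch; the port and server command are whatever the project uses):

```shell
# Serve the project locally (Python 2's built-in server, as above)
python -m SimpleHTTPServer 8080 &

# Tunnel the local port through ngrok; it prints a public forwarding URL
./ngrok http 8080
```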
ngrok is most ergonomic if we get a premium account and reserve our own domains to get easy-to-remember URLs. Otherwise, the URL is randomized each time and very hard to type. The Basic license provides 3 reserved domains for $5/mo, and the Pro license provides 2 simultaneous instances for $8.25/mo. See ngrok pricing details.
With reserved domains, we can reserve a URL like `abc.ngrok.io`, where `abc` is just an example (and currently taken). Then every time we start ngrok, we pass the reserved subdomain, and we can reliably use the same URL every time. To make it even simpler, we could add a Bash alias and use it from the command line from anywhere.
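A sketch of what that might look like with ngrok’s `-subdomain` option (the `abc` subdomain and the ngrok install path are assumptions for illustration):

```shell
# Start ngrok with a reserved subdomain so the URL is stable
./ngrok http -subdomain=abc 8080

# Optional: wrap it in a Bash alias (e.g., in ~/.bashrc)
alias tunnel='~/path/to/ngrok http -subdomain=abc'

# Then from anywhere:
tunnel 8080
```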
Alternatively, we could have both computers connected on the same local network, and use `ifconfig` to point one computer to the other using the local IP address (e.g., `http://192.168.1.135:8000`). But that can disrupt workflow because we have to run commands to get the local IP address, the local IP address often changes, and it’s hard to remember and type the IP.
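For reference, finding the local IP address usually looks like this (commands vary by OS; this is a common Linux/macOS default, and the address shown is illustrative):

```shell
# List interface addresses and look for the LAN address (e.g., 192.168.1.135)
ifconfig | grep "inet "

# Then browse from the other computer to http://<that-address>:8000
```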
Motion capture was described above, but to reiterate, motion capture helps immensely when developing across machines. The VR recordings can be shared and replayed on other computers. After taking a motion capture recording, hit `u` on the keyboard to upload the recording data and get a URL that can easily be transferred (versus emailing ourselves). Alternatively, recordings can be transferred using file.io.