The first-ever installment of Prefabs TLX just wrapped up! Considering the very tight schedule, and this generally being a very busy time of the year for VRChat world creators - it was a smashing success.

But before I go into the final stats and details, let's go back a little bit.

The Idea

The idea of a VRC-centric developer event had been tossed around in different communities for a while. At one point it was even floated as a part of the community meetup, but it never came to be.

It still came up from time to time though, especially after a big new world release, since a lot of people would be interested in how it was made, but Discord channels are, sadly, not the best place for such a detailed discussion.

It had been bugging me for a while that we didn't have such an event though. I personally absolutely love developer conferences - they used to be my favorite time of the year. But since we don't get to have those now - I thought "why not make one myself?" and drafted a proposal for the Prefabs mod team to evaluate. That was in early November.

If you are interested, here's the original proposal

Due to a lot of tasks at work and other personal things in the way - I wasn't able to really get to the TLX project until my planned vacation at the end of November. I had originally planned to spend that vacation on Cyberpunk 2077, but we know how that went as well.

The Start

So, on November 17, 2020, I made a global announcement in the VRCPrefabs server asking people to submit their talk proposals via a Google Form. The plan was to see how many people were interested, and to determine the time needed for the event and the overall schedule.

Surprisingly, what I thought would be a short one-day gathering ended up attracting 14 talk proposals, 12 of which actually became talks you can now watch in the archive!

I was not expecting such a response at all, so I had to figure out how we were going to organize all of this.

But before that happened - I got to work on the event venue.

The Venue

If you skipped the proposal - here's the core idea:

The venue will be made specifically for this event. It should have proper voice setup so everyone can hear the presenter, while allowing people near each other to quietly chat without distracting anyone (that’s a stretch goal).

The venue should have a camera system that records the presenter with the slides overlaid on top, so it can be recorded as a nice VOD and uploaded to YouTube later.

The venue should have a general hall area that allows for people to go out and chat, take a break or ask questions.

General Shapes

This seemed easy enough. After searching for some references, I came up with a reference board that looked something like this.

TLX Reference Board

From the get-go I tried to recreate the TED stage, with all the wooden structures inside a larger arena. But the more I worked on it - the more obvious it became that it's just too large of a space for a typical VRChat instance. So after a while - it was completely scrapped, but here's the latest screenshot.

First iteration of TLX Venue

Funnily enough, the idea for what ended up being the final stage shape and style - came to me out of necessity. I had to make an announcement poster, so I modelled a basic shape in Blender just to have a nice custom background. And if you were at the event - you can see that it's basically the exact stage from the final world.

TLX Announcement

With that in mind, and my general obsession with curved wooden shapes and soft lighting - I threw together a basic layout that looked something like this.

First Render

Not very inviting, is it? It was bothering me too, but sometimes you just have to keep working on things for them to start making sense, and after a couple more hours - I had something like this.

Second Render

Much better now! I wanted the place to be more on the cozy side, rather than a very intimidating stage that might've triggered strong stage fright in inexperienced presenters. And that seemed to work, at least based on the feedback I got.

After the general layout was done, it was time to put it into Unity and check out the scale, figure out UVs and set up some materials.

Scale for VR is the hardest thing there is, imho. No matter how many reference cylinders I put into the world - I always end up tweaking the overall scale once I see it in VR.

After some basic scaling and materials, I ended up with something like this.

First Unity Pass

The Walls, The Outside and The Lighting

The walls were the major pain point for me. While they looked great from the audience's side - they were blinding the presenters. And since we're not in the real world - I wanted to skip that part of IRL presentations. Being blinded isn't fun, and is just a limitation of real-world physics.

First Walls

So I redid them from scratch again and again, landing on the version that ended up looking like this.

Final Walls

That caused its own issues with lighting, and required me to redo them a couple of times to get properly sized emissions, but it was the better option in the long run.

Then there was the outside area...

I had 4 days left until the event at that point; all the stage parts were ready, but I still hadn't written my own talk, and the outside "social" zone was just not there.

But I had a general vision of it in mind, at least, so it was just a matter of implementing it fast!

All I wanted from the outside at that point - was a buffer space for people to chat in between and during talks without distracting anybody else. The bigger social gathering space had already been cut due to time constraints, so I just needed something simple and functional.

The Beginnings of The Outside

It all started with just a couple of bean bags and some sort of a bench for people to gather around.

In hindsight - it was in a weird spot - away from the general path, so it wasn't used at all. Everyone just gathered around the entrance.

After that I borrowed some ceiling ideas from Star Citizen (as I often do), and modelled a wooden drop-ceiling thing that ended up fitting into the environment quite nicely.

The Outside Ceiling

And with that in place, all that was left to do - was to fill the rest of the space with some sort of... stuff, and here is how it all ended up looking.

The Outside Final

Before publishing I still made some key adjustments to the scale (this again...) and added a timeline of the event. But the rest stayed just as it is in the screenshot above. The plant in the corner covers an awkward spot in the geometry, btw. As it should!

Once that area was complete - I just walled off the rest of the interior that wasn't finished, and called it "good enough" for the first event. Which was a bit of a shame, but I couldn't have physically finished the project otherwise.

I used Magic Light Probes to place the probes, which generally did an OK job after I increased the corner spacing and tweaked the fill rate. Then I scattered a lot of IES point lights and that was that. By the way, getting good light probes to work with Bakery is a whole different story, but I think I ended up with something passable by the end of it.

Lightprobe Setup

The whole world in a top-down orthographic view

Ortho

The Tech

To not get myself too burnt out - I would take pauses from the 3D modelling process and switch to code.

Presentation System

The core piece that had to be done - was the presentation player. There were a couple of targets I had to hit with it:

  • It had to use videos for presentations, so no slides needed to be embedded into the world
  • It had to support video slides for animations or... well, videos
  • It had to provide a slide "lookahead" for the presenter, so they can see what's coming up
  • It had to sync for late joiners and provide info for the stream

So with those in mind (and completely forgetting about late joiners) - I built a player using network events and was very happy with the progress. Everything worked great: the slides changed, you could control each slide's length individually and define whether it should autoplay or not, everything was done locally, it was responsive, and it provided a lookahead.

But then I remembered that late joining players are a thing, as well as the ownership, so the whole implementation got rewritten.

If the new VRChat networking were already in place, this would've been done differently, but that hasn't happened yet, at least at the time of writing, so the whole system uses the classic OnDeserialization approach.

The flow ended up looking like this:

  • There is a core PresentationPlayer behaviour that holds an array of TLXTalk behaviours
  • The speaker takes control of the player, making them the owner and hiding all the control UI for everyone else
  • Navigation is done by just setting the slide index on the owner and letting all the clients catch up with them
  • Switching talks is done the same way - the global talk index is switched, the players try to load the new video (with rate-limit retries) and after a couple of seconds it appears for presenters (and is silently initialized for other clients)
  • For late joiners the above is a two-fold process: first the talk index is set and the talk is prepared with a delay to let the video initialize; only then are the talk slides synced to the correct position (since we can't seek a video that isn't loaded yet)

So in such a system - the TLXTalk is a source of information for the parent PresentationPlayer to use.
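
To make that more concrete, here's a minimal UdonSharp sketch of the sync flow. The names and helper methods are simplified stand-ins for illustration, not the actual event code:

using UdonSharp;
using UnityEngine;
using VRC.SDKBase;

public class PresentationPlayer : UdonSharpBehaviour
{
    // Written by the owner (the current speaker); everyone else
    // catches up in OnDeserialization
    [UdonSynced] public int talkIndex;
    [UdonSynced] public int slideIndex;

    private int _localTalkIndex = -1;
    private int _localSlideIndex = -1;

    // Owner-side navigation: with the classic continuous sync,
    // setting the synced variable on the owner is all it takes
    public void NextSlide()
    {
        if (!Networking.IsOwner(gameObject)) return;
        slideIndex++;
    }

    public override void OnDeserialization()
    {
        if (talkIndex != _localTalkIndex)
        {
            _localTalkIndex = talkIndex;
            // Load the new talk's video first (with rate-limit retries);
            // slides can't be seeked until the video is loaded
            LoadTalkVideo(talkIndex);
        }

        if (slideIndex != _localSlideIndex)
        {
            _localSlideIndex = slideIndex;
            ShowSlide(slideIndex);
        }
    }

    private void LoadTalkVideo(int index) { /* video loading + retries */ }
    private void ShowSlide(int index) { /* seek to the slide's timestamp */ }
}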

This posed a bit of an issue for the host: to control the talk index - I had to take ownership of the player, and that isn't always convenient, as the player is physically on the stage itself. So for that case I made a special host panel that works in a very hacky way (as this was rushed at the very end).

The host panel talk switching is done like this:

  • The host owns the panel, which has a talkOverrideIndex
  • The host sets the index and waits for it to propagate across the network
  • The host calls Sync on the owner of the presentation player, which in turn reads the talkOverrideIndex and sets the local index to it

Which, in hindsight, maybe isn't as bad of a system as I might've thought originally. It never broke during the event, and it provided a nice way to perform an indirect synced action without yanking ownership around, which has its own set of things to consider.
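
Here's a rough UdonSharp sketch of that flow - the panel and method names match the description above, the rest is a simplified reconstruction:

using UdonSharp;
using UnityEngine;
using VRC.SDKBase;
using VRC.Udon.Common.Interfaces;

public class HostPanel : UdonSharpBehaviour
{
    public UdonSharpBehaviour presentationPlayer;

    // Owned by the host; propagated to everyone by continuous sync
    [UdonSynced] public int talkOverrideIndex;

    // Step 1: the host sets the synced index and waits a moment
    public void SetTalkOverride(int index)
    {
        if (!Networking.IsOwner(gameObject)) return;
        talkOverrideIndex = index;
    }

    // Step 2: ask the presentation player's owner to apply the value.
    // The "Sync" event runs only on the owner's client
    public void RequestSwitch()
    {
        presentationPlayer.SendCustomNetworkEvent(NetworkEventTarget.Owner, "Sync");
    }
}

On the player's side, the public Sync method just reads talkOverrideIndex from the panel and assigns it to the player's own synced talk index, so no ownership transfer is needed.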

By the end of it - the PresentationPlayer ended up being the most consistently working thing in the whole event, which was a bit amusing, as it is based on the most issue-filled system there is: ownership and synced variables.

Which I guess is a nice display of the fact that it can actually work.

Audio System

Amusingly, the fully local, area-trigger-based system ended up being the least stable, which was most certainly my fault. Let's outline the requirements first though:

  • It had to provide a way to control volume settings based on which zone the player is currently in
  • It had to allow restricting the zone only for a particular set of players
  • It had to provide an interface to set up zone settings
  • It had to support late joiners

I used the tagging system to build a RoleManager, which would tag all the players with specific roles like host or speaker, or audience in case they were not on the list.

In turn, the Audio Zones (which I had built and tested earlier in my home world) would read those tags and use them for access restrictions. They would also tag the players with zone names, which was meant to allow more dynamic volume adjustment, as that required the system to be able to tell which players are in which zones. The dynamic system for local volume adjustment never made it into the final build, but the players are still tagged with the correct zone names.
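
A minimal sketch of the role lookup, assuming roles are driven by hardcoded display name lists (the actual RoleManager isn't shown in this post, so treat this as an illustration of the idea):

using UdonSharp;
using UnityEngine;
using VRC.SDKBase;

public class RoleManager : UdonSharpBehaviour
{
    public string[] hostNames;
    public string[] speakerNames;

    // Everyone not on a list falls back to "audience"
    public string GetRole(VRCPlayerApi player)
    {
        string displayName = player.displayName;

        foreach (string hostName in hostNames)
            if (displayName == hostName) return "host";

        foreach (string speakerName in speakerNames)
            if (displayName == speakerName) return "speaker";

        return "audience";
    }
}

The zones would then compare the resulting role against their own allow-lists before reacting to a player's trigger events.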

The late joining issue was handled by lifting all the colliders up 1000 units, and putting them back down 5 frames later. While that might sound kinda stupid - it is generally a pretty reliable way to force OnPlayerTriggerExit and OnPlayerTriggerEnter events. It was also used in the FixAudio method that I added down the line.

This whole operation is required because players that are already inside a collider when you join - won't fire OnPlayerTriggerEnter. I suspect it is because the players are already there when the UdonBehaviour initializes, or something like that, but I do not know for sure.
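
In code, the trick looks roughly like this (a sketch with assumed names; the real FixAudio logic lives inside the zone system):

using UdonSharp;
using UnityEngine;

public class AudioZoneResync : UdonSharpBehaviour
{
    public Collider[] zoneColliders;

    private Vector3 _liftOffset = new Vector3(0, 1000f, 0);
    private int _framesLeft = -1;

    // Called on join (and from the FixAudio button): yanking the
    // colliders away forces OnPlayerTriggerExit for everyone inside
    public void Resync()
    {
        foreach (Collider zoneCollider in zoneColliders)
            zoneCollider.transform.position += _liftOffset;
        _framesLeft = 5;
    }

    private void Update()
    {
        if (_framesLeft < 0) return;
        _framesLeft--;
        if (_framesLeft > 0) return;

        // Putting the colliders back re-fires OnPlayerTriggerEnter
        foreach (Collider zoneCollider in zoneColliders)
            zoneCollider.transform.position -= _liftOffset;
        _framesLeft = -1;
    }
}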

At the end of the day, the core flaw of the system was the reliance on the poorly designed RoleManager, which failed to set the tags correctly and wasn't synchronized with the late joiner logic, even though I had a proper OnRolesAssigned callback in there that I forgot to utilize... that's what crunch does to you...

Camera System

The last piece of the puzzle was to have cameras that look at the current presenter, track them, and show the result on the main screens in the world, as well as on the stream.

  • It had to follow the player's head, framing them nicely for the stream
  • It had to be able to switch between different players when they took ownership of the PresentationPlayer

Most of the logic was handled directly by Cinemachine. The only piece that caused confusion there was running two camera setups simultaneously. Basically, Cinemachine works by moving the actual camera between many virtual cameras, driven by a single Cinemachine Brain component that defines the core behaviour.

If you want two different Cinemachine Brains running at the same time - you need to put their virtual cameras on different layers, and make sure each brain's camera does not render the opposing layer.

While this consumes some of the precious layers - you can reuse some of the defaults, so it's not that big of a deal.
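
For reference, the mask arithmetic looks like this. In the world itself this is just inspector configuration; the layer numbers here are made up for the example:

using UnityEngine;

public class BrainLayerSplit : MonoBehaviour
{
    public Camera screenCamera;  // brain #1 - feeds the in-world screens
    public Camera streamCamera;  // brain #2 - feeds the stream view
    public int screenVcamLayer = 22;  // hypothetical layer assignments
    public int streamVcamLayer = 23;

    private void Start()
    {
        // A Cinemachine Brain only considers virtual cameras on layers
        // its camera renders, so each camera masks out the other's layer
        screenCamera.cullingMask &= ~(1 << streamVcamLayer);
        streamCamera.cullingMask &= ~(1 << screenVcamLayer);
    }
}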

The main showstopper there ended up being the tracked player reference. I didn't want to use a synced object, as it would introduce unnecessary lag and possible jitter compared to the locally available position of the player's head, so everything was done locally. When a player takes control of the PresentationPlayer - their player object is saved into the target variable of the CameraController, which then starts tracking.

There was one issue with that approach - if the player leaves - the null check inside the Update loop is not enough, as the player object is now invalid.

I ended up adding a direct OnPlayerLeft handler inside the CameraController, which checks if the player who just left is the current target - and just disables the Update code outright until the next target is set. That fixed all of the issues.
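
A trimmed-down UdonSharp sketch of that controller - the field names are approximations:

using UdonSharp;
using UnityEngine;
using VRC.SDKBase;

public class CameraController : UdonSharpBehaviour
{
    public Transform lookAtTarget;  // the transform the virtual cameras track

    private VRCPlayerApi _target;
    private int _targetPlayerId = -1;

    // Called locally when someone takes control of the PresentationPlayer
    public void SetTarget(VRCPlayerApi player)
    {
        _target = player;
        _targetPlayerId = player.playerId;
    }

    private void Update()
    {
        // The null check alone isn't enough - see OnPlayerLeft below
        if (_target == null) return;
        lookAtTarget.position = _target.GetBonePosition(HumanBodyBones.Head);
    }

    // The leaving player's API object becomes invalid, so drop the
    // target outright until the next one is set
    public override void OnPlayerLeft(VRCPlayerApi player)
    {
        if (player != null && player.playerId == _targetPlayerId)
        {
            _target = null;
            _targetPlayerId = -1;
        }
    }
}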

The Tech Art

The other side of the functionality of any 3D project - is the shaders. The setup was generally pretty simple, with some custom things sprinkled on top, both in Amplify and directly in vert/frag code.

Environment Materials

Since I was on a very tight schedule - I knew from the start that a lot of things would have to be triplanar-mapped to save time on UV unwrapping. While it is costly - most of the things are batched, and the overall number of meshes is low, so I wasn't too concerned about that.

I ended up using the XSEnvironment shader for most of the meshes, as it has all the things I needed: triplanar World/Object space mapping, GSAA, Lightmap Specular, you name it.

But for the floors I knew from the beginning that I would want nicely tiled carpets. And tiled textures on large surfaces like that are usually easy to spot, unless they're completely uniform - and then it's just a bit too fake and boring.

So I ended up using Silent's SimpleLazyTriplanar shader, which offers Stochastic Tiling - a way to tile in a unique, non-repeating pattern (if you've seen a Voronoi noise example, it looks somewhat like that) with some blending. That works wonders on high-density random patterns like carpets, grass, ground, you name it.

The SLT shader worked great, but had one dealbreaker - no parallax, which basically kills the whole idea of a carpet.

I ended up modding a very basic parallax technique into it, based on Unity's implementation outlined in this Catlike Coding tutorial.

So I added the parallax map properties at the top

Properties {
    ...
    _ParallaxMap ("Parallax", 2D) = "black" {}
    _ParallaxStrength ("Parallax Strength", Range(0, 0.1)) = 0
    ...

Passed them into the shader

...
sampler2D _ParallaxMap; float4 _ParallaxMap_ST;
...
half _ParallaxStrength;
...

And replaced this code

if(weights.x > 0) {
    mixedDiffuse += weights.x * TEXTURE_SAMPLE(_MainTex, y0 * _MainTex_ST.xy + _MainTex_ST.zw) * _Color;
    nrm += weights.x * UnpackScaleNormal(TEXTURE_SAMPLE(_BumpMap, y0 * _BumpMap_ST.xy + _BumpMap_ST.zw), _NormalScale);
    combi += weights.x * TEXTURE_SAMPLE(_Combined, y0 * _Combined_ST.xy + _Combined_ST.zw);
}
if(weights.y > 0) {
    mixedDiffuse += weights.y * TEXTURE_SAMPLE(_MainTex, x0 * _MainTex_ST.xy + _MainTex_ST.zw) * _Color;
    nrm += weights.y * UnpackScaleNormal(TEXTURE_SAMPLE(_BumpMap, x0 * _BumpMap_ST.xy + _BumpMap_ST.zw), _NormalScale);
    combi += weights.y * TEXTURE_SAMPLE(_Combined, x0 * _Combined_ST.xy + _Combined_ST.zw);
}
if(weights.z > 0) {
    mixedDiffuse += weights.z * TEXTURE_SAMPLE(_MainTex, z0 * _MainTex_ST.xy + _MainTex_ST.zw) * _Color;
    nrm += weights.z * UnpackScaleNormal(TEXTURE_SAMPLE(_BumpMap, z0 * _BumpMap_ST.xy + _BumpMap_ST.zw), _NormalScale);
    combi += weights.z * TEXTURE_SAMPLE(_Combined, z0 * _Combined_ST.xy + _Combined_ST.zw);
}

With this code

// Sample the height from all three planar projections, weighted
// the same way as the other maps, then center and scale it
float height = weights.x * TEXTURE_SAMPLE(_ParallaxMap, y0 * _MainTex_ST.xy + _MainTex_ST.zw).r;
height += weights.y * TEXTURE_SAMPLE(_ParallaxMap, x0 * _MainTex_ST.xy + _MainTex_ST.zw).r;
height += weights.z * TEXTURE_SAMPLE(_ParallaxMap, z0 * _MainTex_ST.xy + _MainTex_ST.zw).r;
height -= 0.5;
height *= _ParallaxStrength;

// Offset-limited parallax, as in the Catlike Coding tutorial:
// the 0.42 bias keeps the offset from blowing up at grazing angles
IN.tangentViewDir = normalize(IN.tangentViewDir);
IN.tangentViewDir /= IN.tangentViewDir.z + 0.42;
float2 parallaxOffset = IN.tangentViewDir.xy * height;

// Apply each map's tiling (xy) and offset (zw), then shift the
// resulting UVs by the parallax offset
float2 mainTexUvX = y0 * _MainTex_ST.xy + _MainTex_ST.zw + parallaxOffset;
float2 bumpMapUvX = y0 * _BumpMap_ST.xy + _BumpMap_ST.zw + parallaxOffset;
float2 combinedUvX = y0 * _Combined_ST.xy + _Combined_ST.zw + parallaxOffset;

float2 mainTexUvY = x0 * _MainTex_ST.xy + _MainTex_ST.zw + parallaxOffset;
float2 bumpMapUvY = x0 * _BumpMap_ST.xy + _BumpMap_ST.zw + parallaxOffset;
float2 combinedUvY = x0 * _Combined_ST.xy + _Combined_ST.zw + parallaxOffset;

float2 mainTexUvZ = z0 * _MainTex_ST.xy + _MainTex_ST.zw + parallaxOffset;
float2 bumpMapUvZ = z0 * _BumpMap_ST.xy + _BumpMap_ST.zw + parallaxOffset;
float2 combinedUvZ = z0 * _Combined_ST.xy + _Combined_ST.zw + parallaxOffset;

if(weights.x > 0) {
    mixedDiffuse += weights.x * TEXTURE_SAMPLE(_MainTex, mainTexUvX) * _Color;
    nrm += weights.x * UnpackScaleNormal(TEXTURE_SAMPLE(_BumpMap, bumpMapUvX), _NormalScale);
    combi += weights.x * TEXTURE_SAMPLE(_Combined, combinedUvX);
}
if(weights.y > 0) {
    mixedDiffuse += weights.y * TEXTURE_SAMPLE(_MainTex, mainTexUvY) * _Color;
    nrm += weights.y * UnpackScaleNormal(TEXTURE_SAMPLE(_BumpMap, bumpMapUvY), _NormalScale);
    combi += weights.y * TEXTURE_SAMPLE(_Combined, combinedUvY);
}
if(weights.z > 0) {
    mixedDiffuse += weights.z * TEXTURE_SAMPLE(_MainTex, mainTexUvZ) * _Color;
    nrm += weights.z * UnpackScaleNormal(TEXTURE_SAMPLE(_BumpMap, bumpMapUvZ), _NormalScale);
    combi += weights.z * TEXTURE_SAMPLE(_Combined, combinedUvZ);
}

This is probably a very dumb way to do this - but it worked well enough that I went with it.

The whole environment was textured with things from Quixel, Poliigon and Substance Source, using the shaders mentioned above.

Main Waves Shader

The big wavy thing is a nice shadertoy ported by SCRN, which is rendered to a custom render texture and reused on all 3 screens via a separate UV channel.

Basically, each of the screens still has proper UV1/2/3 channels to display any arbitrary image if needed. But UV4 is unwrapped in such a way that all 3 screens are stacked together horizontally. The special shader used on the screens then takes the custom render texture with the wave and renders it using that 4th UV channel, which makes it show up as one continuous line.

The logo is then added on top using a UV position check (so it doesn't show up on the side screens), with UV1 providing the actual coordinates, and its outline is used to mask out the waves texture so it will not cross the letters in an unpleasant way.

The Flags Shader

The flags are a basic vertex shader that uses the famous shadertoy noise texture to create a nice waving pattern. A lower mip at a high scale is used to achieve large, averaged, slow waves on the flag.

This affects both the vertex position and the normal of the mesh, so that the wave also creates a slight difference in lighting for the higher-frequency waving.

The Postmortem

Now that we had the world, with all the required functionality, looks and presentations in it - it was time for the event. Hoping that nothing would go wrong, I went to bed 4 hours before the event start, and the rest is history.

What Went Wrong

As mentioned throughout the post - a lot of things did end up breaking, 99% of the time due to my own mistakes, as expected. But here are the main points:

  • The Audio Zone role checks were failing due to RoleManager errors; this was often fixable with the FixAudio button, but sometimes required a rejoin
  • The stream account setup was not planned well - I had to interrupt the stream to adjust player volumes, and could still hear other players between sessions
  • The Camera System had issues when the previous speaker would leave during the break
  • The Host canvas didn't allow me to boost my voice outside of the main speaker circle, which was very inconvenient
  • The Host canvas didn't have simple "next talk / previous talk" buttons to act as a backup for the synced variables
  • General lack of backup solutions for immediate issues like "hey, I can't see this", "hey, this isn't working"
  • There wasn't a nice flow for people to get out into the social zone to chat, and then go back for the next talk
  • There was no way to completely disable the stage zone so I could guide the next speaker through the system without them being heard across the instance

What Went Right

  • The environment seemed to hit the spot when it comes to general look and feel
  • The presentation player worked in 99% of the cases
  • The questions zone seemed to work just fine despite all other audio zone issues
  • People seemed to enjoy the general experience of being at the event... which is the ultimate goal at the end of the day

What I Will Prioritize

With all of the above in mind - I have a rough outline of things I want to do, apart from just addressing the issues:

  • The outside area still needs a bit of scale adjustments to feel more cozy
  • The outside area social zone is still needed as a buffer
  • The breaks should be longer. I was worried about the length of the event, but it seems like listening to relevant talks turns on time dilation the same way it does in real life - most people said that 6 hours of VR flew by very quickly, so I think there is some wiggle room for 10 to 15 minute breaks
  • If we're going to have any more talks - multiple instances will be required, or maybe 2 rooms within one instance, if the size permits
  • The event should be on a 4-week cycle, instead of the 3 weeks it got this time. I didn't have a choice due to other events going on, but next time there will definitely be one extra "polish" week

Closing Thoughts

I cannot overstate how happy I am with the result. Even a couple weeks after the event - I already had people reach out, saying that they watched a talk from the event and it helped them with something they're working on, which is the ultimate validation of the whole idea.

There are still a lot of things to do. This website was one of the main goals, and I'm pretty happy with how it turned out so far, but there is plenty of work still left to do even on this.

And while I do not think there will be an event earlier than March/April next year, knowing myself, I'll try to work on this on and off, so I won't have to crunch as hard as this time. Even though I know I'll crunch at some point anyways.

If you made it this far - I just want to thank you for your attention. I hope it was at least somewhat interesting / useful to you :) If you have any questions - feel free to reach out on Twitter or on my Discord server - I am more than happy to go into more detail.

Thank Yous

And of course this world would not have been possible without all of these amazing people:

  • Merlin, due to UdonSharp which is the sole reason I'm doing anything in Udon these days

  • Xiexe, because of all the shaders and references I could borrow from

  • Silent, again, because of the shaders

  • CyanLaser, due to CyanEmu, which kept me sane while testing all of this

  • 1, because of VRWorldToolkit, which kept track of all the dumb mistakes I made

  • All of the 4th P members: IgbarVonSquid, Jordo, Chim-Cham, Akalink, Fins, Peptron, Ruuubick, Legends, Squid, Lakuza, Cyan and Fionna... they kept me motivated and were always supportive and encouraging

  • SCRN, for being there for me ❤

  • All of the Speakers who have gathered their strength to present at the event and did an amazing job at that!

  • All of the VRCPrefabs community for the opportunity to do this to begin with

  • And all the people who watched the event, keep watching the VODs of the talks and share them with their friends

From the bottom of my heart - THANK YOU and until next time!