Each Canvas acts as a renderable collection of UI elements in a scene. You can have more than one in your scene.
Canvas > Panel > UI Elements is a typical nested structure for Unity UI. Examples of UI Elements include Button, Text, Image, Slider, etc. You can even nest a Panel within a Panel, don't be scared.
There are three Render Mode settings for a Canvas: Screen Space – Overlay (a 2D, perspective-free HUD rendered above all scene objects), Screen Space – Camera (a 2D HUD rendered by a specific camera at a set plane distance, so it can pick up perspective effects), and World Space (a true 3D object in the scene with variable scene depth).
Every UI element has a RectTransform (it replaces the standard Transform). You set its Anchors to control how the element positions and resizes relative to its parent. You set its Pivot to define the point within the rect from which its 2D position (x, y) is measured, and around which scaling and rotation occur.
All UI elements are composed of Text and Image components in one form or another, that’s it. An Image component can even be a solid fill color instead of an actual Sprite, it’s no big deal.
All interactive UI elements extend the Selectable class. This class provides an Interactable toggle, transition settings (normal, highlighted, pressed, disabled), and navigation settings (“tab order” for PC keyboards, console controllers, etc.).
Use a CanvasGroup component to fade out a collection of UI elements (panel and its children) as one entity. This component also allows you to enable/disable all interactive children at once. CanvasGroup is your friend.
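The fade itself takes only a few lines of script. Here is a minimal sketch (the `PanelFader` class name and the hard fade-to-zero behavior are my own assumptions; wire the CanvasGroup reference up in the Inspector):

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical helper: fades a whole panel (and its children) via CanvasGroup.
public class PanelFader : MonoBehaviour
{
    [SerializeField] private CanvasGroup canvasGroup; // assign in the Inspector

    public IEnumerator FadeOut(float duration)
    {
        float start = canvasGroup.alpha;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            canvasGroup.alpha = Mathf.Lerp(start, 0f, t / duration);
            yield return null; // wait one frame
        }
        canvasGroup.alpha = 0f;
        canvasGroup.interactable = false;   // disable all interactive children at once
        canvasGroup.blocksRaycasts = false; // let clicks pass through the faded panel
    }
}
```

Start it with `StartCoroutine(FadeOut(0.5f))` from any event handler on the same object.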
Unity renders UI elements in your Hierarchy view from top to bottom. This means the bottommost UI element in the Hierarchy is rendered last, and thus appears above all other UI elements on screen.
A ToggleGroup component is useful for mimicking “Radio Buttons”. It is best practice to put the ToggleGroup component on the parent of the Toggle instances, though not required... why did I tell you that last part?
When importing PSD files, make sure to update its Texture Type setting to Sprite (2D & UI) if you intend to use the asset for 2D or UI. This is a gotcha that can be frustrating if not set properly.
For custom UI interaction events, have a custom component script implement one of the interfaces in UnityEngine.EventSystems, such as IPointerEnterHandler, IDragHandler, or ISelectHandler. The EventSystem then calls each interface’s method, letting you execute custom code in response.
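As a quick sketch, here is a component implementing two of those interfaces (the class name and the `Debug.Log` bodies are placeholders for your own logic):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Hypothetical example: react to the pointer entering and leaving this element.
public class HoverLogger : MonoBehaviour, IPointerEnterHandler, IPointerExitHandler
{
    // Called by the EventSystem when the pointer moves onto this element.
    public void OnPointerEnter(PointerEventData eventData)
    {
        Debug.Log("Pointer entered " + gameObject.name);
    }

    // Called by the EventSystem when the pointer leaves this element.
    public void OnPointerExit(PointerEventData eventData)
    {
        Debug.Log("Pointer exited " + gameObject.name);
    }
}
```

Attach it to any UI element under a Canvas with a GraphicRaycaster and an EventSystem in the scene.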
The Navigation setting on an interactive UI element determines “tab order”, i.e. how focus moves between UI elements under controller- or keyboard-based interaction models. The Visualize button shows the focus flow in the Scene view.
You can use the EventTrigger component on a Selectable UI element (Button, Slider, etc) to expose additional event types in the Inspector. This is useful if you want to make a UI or scene change dynamically based on user input other than OnClick.
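You can also wire up an EventTrigger entry from code rather than the Inspector. A minimal sketch (the component name and the log message are hypothetical):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Hypothetical example: add an EventTrigger at runtime and listen for PointerEnter.
public class HoverTriggerSetup : MonoBehaviour
{
    void Start()
    {
        var trigger = gameObject.AddComponent<EventTrigger>();

        var entry = new EventTrigger.Entry { eventID = EventTriggerType.PointerEnter };
        entry.callback.AddListener(data => Debug.Log("Hovered " + gameObject.name));

        trigger.triggers.Add(entry); // register the entry with the trigger
    }
}
```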
For masking, nest a Panel (PanelA) inside another Panel (PanelB) where PanelA is larger than PanelB. Then select PanelB and add a Mask component. PanelA’s content now shows through only where it overlaps PanelB. Remember, the masked area is based on the Source Image property of PanelB’s Image component.
For a Mask, the mask area is based on the attached image component. Transparency in the image cuts out the visible area of its children. You can uncheck Show Mask Graphic to hide the graphic itself but still provide the mask area, nice.
For masked scrolling, simply add a ScrollRect component to a parent Panel containing a Mask component. Then set the Content property of the ScrollRect to the child panel whose bounds exceed those of the parent panel. Scrolly time.
For Auto Layouts, simply add a GridLayoutGroup, HorizontalLayoutGroup, or VerticalLayoutGroup component to a Panel and all its children will automatically align. You will likely want to tweak this automatic alignment, and you can do so with the Start Corner, Start Axis, and Child Alignment properties. For a GridLayoutGroup, the parent also controls the Cell Size of its children.
A Button can have a GridLayoutGroup, HorizontalLayoutGroup, or VerticalLayoutGroup component to automatically layout its children. This is useful if a single button has numerous visual elements within it. In addition, the children can have a LayoutElement component to set min, max, and preferred sizes that override or inform the layout group to handle the child in a specific way. Go make cooler buttons.
A Button with a HorizontalLayoutGroup component and padding can resize to fit its children and grow no bigger. A LayoutElement component with no flexible width set is necessary for this to work.
Set a UI element’s LayoutElement flexible width or flexible height properties to decimal values to allow the parent layout group of each to organize them by “weight” or “percent”. This percentage based layout works until all children only have room to be their preferred width or preferred height. This is super useful and similar to Android’s layout_weight or the Web’s percentage based layouts.
You can use an empty GameObject with a RectTransform instead of a Panel to group UI elements. This is handy if you want a “lighter Panel” that doesn’t require a visual representation itself.
Add a CanvasScaler component to a Canvas to better control resolution variations. It is a good idea, and most common, to set the Screen Match Mode to Match Width Or Height with a Match value of 0.5. The Expand and Shrink options are best for background graphics that simply cover the entire screen area (a smaller use case, but it depends on your game/app).
When dynamically adding UI elements to a scrollable content Panel, a ContentSizeFitter component is of great help. It will make the scrollable area size “snap” to the combined area of the internal UI elements. For this to work, 1) the children UI elements each need a LayoutElement with set preferred sizes (width, height, or both) and 2) the panel with the ContentSizeFitter needs its layout group’s Child Force Expand (width, height, or both) setting(s) disabled.
For each texture, you can set platform overrides. This is useful for changing the graphic size and compression of image assets on a per-platform level.
Unity doesn’t have a built-in pixel-density solution, so sad face. However, you can poll the device or machine using the Screen class to get the resolution, pixel density (dpi), and more. An approach could be 1) get the Screen info 2) calculate a “best fit” (similar to Android’s ldpi, mdpi, hdpi, etc), and 3) ensure all graphic assets are utilized from the “best fit” Resources folder.
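The three steps above can be sketched as a small helper. The bucket names and dpi thresholds below mirror Android's convention but are assumptions; tune them for your asset pipeline:

```csharp
using UnityEngine;

// Hypothetical sketch: pick a density bucket from Screen.dpi, then load
// art from a matching Resources subfolder ("ldpi", "mdpi", "hdpi", ...).
public static class DensityHelper
{
    public static string BestFitFolder()
    {
        float dpi = Screen.dpi;           // 0 when Unity can't determine it
        if (dpi <= 0f) return "mdpi";     // fall back to a sane default
        if (dpi < 160f) return "ldpi";
        if (dpi < 240f) return "mdpi";
        if (dpi < 320f) return "hdpi";
        return "xhdpi";
    }
}
// Usage: var sprite = Resources.Load<Sprite>(DensityHelper.BestFitFolder() + "/logo");
```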
To create your own preloader scene, make the first scene virtually empty (except for what you need to visually represent preloading). When this preloader scene is loaded, call LoadLevelAsync() or LoadLevelAdditiveAsync() (SceneManager.LoadSceneAsync in newer Unity versions), then query its progress and draw the preloader visuals.
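A minimal sketch of that preloader, using the legacy Application.LoadLevelAsync API named above (the "Main" scene name and the Slider-based progress bar are assumptions):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

// Hypothetical preloader: lives in the near-empty first scene,
// kicks off an async load, and draws progress while it runs.
public class Preloader : MonoBehaviour
{
    [SerializeField] private Slider progressBar; // assumed progress UI, assigned in Inspector

    IEnumerator Start()
    {
        AsyncOperation op = Application.LoadLevelAsync("Main"); // scene name assumed
        while (!op.isDone)
        {
            progressBar.value = op.progress; // climbs toward 0.9 while loading
            yield return null;               // redraw next frame
        }
    }
}
```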
It is best practice to use generic lists as opposed to ArrayList. The Unify Wiki has a great breakdown of when to use one over the other, why, and alternatives.
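The core difference in a nutshell (plain C#, no Unity types involved): ArrayList stores object references, so value types get boxed on the way in and need a cast on the way out, while List&lt;T&gt; is type-safe and avoids both costs.

```csharp
using System.Collections;
using System.Collections.Generic;

// ArrayList: untyped, boxes value types, casts required on every read.
ArrayList legacy = new ArrayList();
legacy.Add(42);               // boxes the int into an object
int unboxed = (int)legacy[0]; // explicit cast (and unboxing) required

// List<T>: type-safe, no boxing, no casts.
List<int> scores = new List<int>();
scores.Add(42);
int first = scores[0];        // just an int
```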
Set up a looping coroutine in a component’s Start() method instead of using Update() when its work doesn’t need to run at the game/app frame rate (a coroutine yielding WaitForSeconds(0.2f) runs 5 times a second vs. ~60fps).
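A minimal sketch of that pattern (the class name and the empty CheckSomething() body are placeholders):

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical example: poll 5 times a second instead of once per frame.
public class SlowPoller : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(PollLoop());
    }

    IEnumerator PollLoop()
    {
        while (true)
        {
            CheckSomething();                      // your periodic work here
            yield return new WaitForSeconds(0.2f); // 5 updates/sec vs ~60 in Update()
        }
    }

    void CheckSomething() { /* ... */ }
}
```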
If you have a scrollable view that holds a large number of UI element instances (ex. list items), you’ll likely want to optimize. Conceptually, if you have a max of 10 instances out of 100 visible at any one time, creating all 100 is a waste. Instead, use the Object Pool pattern. This way you only instantiate 10 or so instances and, based on scrolling position, you recycle and update each view with the correct data. Android and iOS use this approach to optimize the performance of scrolling long lists.
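A bare-bones Object Pool sketch for those list items (the class and field names are hypothetical; a real pool would pre-warm itself and cap its size):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical pool: reuse a small set of list-item instances
// instead of instantiating one per data row.
public class ListItemPool : MonoBehaviour
{
    [SerializeField] private GameObject itemPrefab; // assigned in the Inspector
    private readonly Stack<GameObject> pool = new Stack<GameObject>();

    public GameObject Get()
    {
        // Reuse a pooled instance if one exists, otherwise create a new one.
        GameObject item = pool.Count > 0 ? pool.Pop() : Instantiate(itemPrefab);
        item.SetActive(true);
        return item;
    }

    public void Release(GameObject item)
    {
        item.SetActive(false); // hide instead of Destroy(), so it can be recycled
        pool.Push(item);
    }
}
```

As an item scrolls out of view, call Release() on it and Get() a fresh one for the row scrolling in, then rebind its data.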
A Prefab with an attached custom component (ideally by the same name) can hold references to itself, and/or its children, and/or other connected components. This allows a single GetComponent<MyComponent>() call to gain reference to the Prefab instance's connected components or GameObjects whose properties change/update at runtime. This is a very useful approach for succinctly "building" or "inflating" Prefab instances at runtime based on dynamic data.
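A sketch of such a "view" component (the `ListItemView` name and the Bind() method are my own assumptions; the child references are wired up once, inside the Prefab):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical component attached to the Prefab root. It caches references
// to its own children so callers need only one GetComponent call to
// populate the whole instance at runtime.
public class ListItemView : MonoBehaviour
{
    [SerializeField] private Text title; // assigned in the Prefab
    [SerializeField] private Image icon; // assigned in the Prefab

    public void Bind(string titleText, Sprite iconSprite)
    {
        title.text = titleText;
        icon.sprite = iconSprite;
    }
}
// Usage: Instantiate(prefab).GetComponent<ListItemView>().Bind("Hello", someSprite);
```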
It is best practice to use a Sprite Sheet (aka Atlas) for your graphics assets in 2D (and 3D) games/apps as opposed to having a one-to-one relationship between a texture and an image file. The core advantage of this approach is performance related. Basically, the device on which your game/app is running can minimize draw calls by UV mapping part(s) of a texture (your Sprite Sheet) multiple times to a geometry. The alternative is a draw call per texture per geometry.
It is a common practice to use an empty GameObject as a “folder” within a scene. This helps you better organize and structure your game in a visual way. A specific approach is to position an empty GameObject at (0, 0, 0) which is then filled with various manager classes, utility classes, and a top-level eventing system. This is a useful approach, especially for persisting components across scene loads.
When in the Scene view, have you ever wanted to make one GameObject point or “look at” another GameObject? To pull this off, simply hover the GameObject in focus with your Rotate Tool selected and then hold ctrl/cmd + shift as you drag the Rotate Gizmo. While dragging, the GameObject you’re rotating will “look at” the GameObject you hover. Things are looking up.
There are times when positioning a GameObject that you want its position to snap to the surface of another GameObject. This is common when wanting to drag an object across a floor for example. You can use Surface Snapping to achieve this. Simply hold down ctrl/cmd + shift prior to selecting and manipulating a GameObject’s position. As you drag, the target surface to snap to updates as you hover different GameObjects.
If you need help aligning GameObjects with vertex precision or to a grid, you need to use Vertex Snapping. To pull this off, select the Translate Tool, hold down the v key, hover the desired vertex of the GameObject in focus, then click and drag. As you drag your GameObject, its position snaps to the hovered vertex of another GameObject. Snappy.
If you want to interpolate values over time in a succinct way, you can use Unity’s built-in Lerp (linear interpolation) methods. Numerous classes have this method or a variation of it. Some common examples are Color.Lerp, Vector2.Lerp, Vector3.Lerp, Mathf.LerpAngle, and Vector3.Slerp. Mmmm… Slerp.
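For instance, Color.Lerp can fade a material over time in a couple of lines (the one-second duration and the white-to-red colors are arbitrary choices):

```csharp
using UnityEngine;

// Hypothetical sketch: fade this object's material from white to red over one second.
public class ColorFader : MonoBehaviour
{
    private float t;

    void Update()
    {
        t += Time.deltaTime; // reaches 1.0 after one second
        GetComponent<Renderer>().material.color =
            Color.Lerp(Color.white, Color.red, Mathf.Clamp01(t)); // t clamped to [0, 1]
    }
}
```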
If you repeatedly find yourself editing values only to realize you were in PlayMode when doing so (and thus losing all your edits), raise your hand… actually don’t raise it. Instead use Edit > Preferences > Colors > Playmode Tint to have Unity automatically color-tint the editor interface so you’re visually reminded that you’re in PlayMode. Damn handy.
To improve precision while moving a GameObject in a scene, you can hold down ctrl/cmd while dragging its Translate Gizmo. This enables the position of the GameObject you’re moving to snap in incremental units. Go to Edit > Snap Settings to customize.