Interactive Bitmaps

Introduction

In this lesson, we will touch-enable two textures using the gesture processing in GestureWorks 2. We will show you how to drag, rotate, and scale pictures of the GestureWorks logo.

This tutorial builds on the last, where we used point event data to display touch rings on the screen. Making use of gestures is where the real power of GestureWorks lies. If you have been following along from the first tutorial, your project should already be set up to connect to GestureWorks 2 and process point data.

NOTE: We will keep the touch and mouse points visible, as in the previous lesson. While not required for this tutorial, they will aid debugging.

Requirements

  • Estimated time to complete: 15 minutes
  • Project setup with GestureWorks as described in Unity Hello World
  • Microsoft Windows 7, 8, or 10
  • Unity 5 or greater
  • Multitouch display device

Process Overview

  1. Set Up the GestureWorks Logo Texture
  2. Register Touch Objects for Gesture Manipulation
  3. Set Up Touch Response
  4. Finishing Up

1. Set Up the GestureWorks Logo Texture

First, copy the GWLogo.png file into your project's Assets\Textures folder. Next, create a Quad GameObject via GameObject→3D Object→Quad, name it Bitmap1, and place it where it is visible to the camera. Then drag the GWLogo texture onto the object.

Next, add a Box Collider to Bitmap1 (Component→Physics→Box Collider).

Then duplicate Bitmap1, rename the copy Bitmap2, and arrange the two quads so both are visible.
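
These editor steps can also be scripted. The following is only a minimal sketch, not part of the tutorial's project: the BitmapSetup class name is ours, and it assumes GWLogo.png has been copied into a Resources folder so Resources.Load can find it.

using UnityEngine;

// A sketch that mirrors the editor steps above: two textured quads,
// each with a Box Collider in place of the default Mesh Collider.
public class BitmapSetup : MonoBehaviour
{
    void Start()
    {
        CreateBitmap("Bitmap1", new Vector3(-1.5f, 0f, 0f));
        CreateBitmap("Bitmap2", new Vector3(1.5f, 0f, 0f));
    }

    void CreateBitmap(string name, Vector3 position)
    {
        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        quad.name = name;
        quad.transform.position = position;

        // Quad primitives ship with a Mesh Collider; swap in a Box Collider
        // to match the editor steps above.
        Destroy(quad.GetComponent<MeshCollider>());
        quad.AddComponent<BoxCollider>();

        // Assumes the texture lives at Assets/Resources/GWLogo.png; adjust
        // if you assign it by hand from Assets/Textures instead.
        Texture logo = Resources.Load<Texture>("GWLogo");
        quad.GetComponent<Renderer>().material.mainTexture = logo;
    }
}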

2. Register Touch Objects for Gesture Manipulation

Now that the 3D objects are set up for multitouch interaction, we can add the scripts needed to make the bitmaps touch-enabled. Before we write more code in the main file, let's explore the GestureWorks Unity classes imported from the asset package.
Inside the GestureWorks/Helper folder you will find two class files, TouchObject.cs and HitManager.cs. TouchObject is the class used to make 3D objects touch-enabled; it has public properties that identify the gestures associated with an object. TouchObject is declared abstract, so it can only be used by extending it.
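
Based only on the members this tutorial touches, the shape of the class looks roughly like the sketch below. The shipped TouchObject.cs contains more than this, and the exact declarations may differ.

using UnityEngine;

// A simplified sketch of the members TouchImage relies on later in this
// tutorial. The Flipped value of -1 is an assumption, inferred from its
// use as a Y-axis multiplier in the ndrag handler.
public abstract class TouchObject : MonoBehaviour
{
    // Names of the gestures this object responds to; set in Awake or in
    // the Inspector, since the field is public.
    public string[] SupportedGestures;

    // Multiplier applied to Y deltas to flip them into Unity's screen
    // space, where Y increases upward.
    protected float Flipped = -1.0f;
}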

The HitManager class finds touch point intersections with the 3D objects in the scene, using Unity's built-in Raycast method to detect hits. For scenes with many objects, further optimization of this class is encouraged.
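
The core of that hit test resembles the sketch below; the shipped class adds bookkeeping around it, so treat this as an illustration only (the HitTest name is ours, not the library's).

using UnityEngine;

// A minimal sketch of a raycast hit test against touch-enabled objects.
public static class HitTest
{
    // Returns the TouchObject under a screen-space point, or null if the
    // ray from the camera through that point hits nothing touch-enabled.
    public static TouchObject Find(Vector2 screenPoint)
    {
        Ray ray = Camera.main.ScreenPointToRay(screenPoint);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            return hit.collider.GetComponent<TouchObject>();
        }
        return null;
    }
}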

Defining a touch object

The next step is to create a class that inherits from TouchObject. We will call this class TouchImage and save it in TouchImage.cs. The class should define the following:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class TouchImage : TouchObject 
{
    void Awake()
    {
        // Register the gestures this object responds to; each name must
        // match a public method below that receives the gesture events.
        SupportedGestures = new string[3] { "ndrag", "nrotate", "nscale" };
    }

    public void ndrag(GestureWorks.GestureInfo gesture)
    {
    }

    public void nrotate(GestureWorks.GestureInfo gesture)
    {

    }

    public void nscale(GestureWorks.GestureInfo gesture)
    {

    }
}

This code registers the set of gestures to process in the Awake method; they could also have been set in the Unity Editor. The class then contains a method named after each gesture it processes. To see which gestures are available by default, see Default Gestures.

Add this script to the two bitmap game objects.
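
If you prefer to wire this up from code instead of the Inspector, AddComponent does the same thing. This snippet assumes the objects keep the names given in step 1:

// Equivalent to dragging TouchImage onto the objects in the Inspector.
// GameObject.Find is used for brevity; caching references is preferable.
GameObject.Find("Bitmap1").AddComponent<TouchImage>();
GameObject.Find("Bitmap2").AddComponent<TouchImage>();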

3. Set Up Touch Response

Now that gesture processing is set up, we need to define how the handler methods respond when gestures occur. When we created TouchImage we only wrote method stubs; in this section, fill out the TouchImage methods with the following:

public class TouchImage : TouchObject 
{
    void Awake()
    {
        SupportedGestures = new string[3] { "ndrag", "nrotate", "nscale" };
    }

    public void ndrag(GestureWorks.GestureInfo gesture)
    {
        // Drag deltas are normalized to the display, so scale them up to
        // pixels; the Y delta is flipped to match Unity's screen space.
        float dX = gesture.value("drag_dx") * Screen.width;
        float dY = gesture.value("drag_dy") * Screen.height * Flipped;

        Camera cam = Camera.main;

        // Offset the object in screen space, then project back into the world.
        Vector3 previousPosition = cam.WorldToScreenPoint(transform.position);
        Vector3 nextPosition = new Vector3(dX, dY, 0.0f);
        Vector3 newPosition = previousPosition + nextPosition;
        transform.position = cam.ScreenToWorldPoint(newPosition);
    }

    public void nrotate(GestureWorks.GestureInfo gesture)
    {
        // Rotate around the Z axis; the sign is inverted so the object
        // follows the direction of the touch rotation on screen.
        float dTheta = gesture.value("rotate_dtheta");

        transform.Rotate(0, 0, -dTheta);
    }

    public void nscale(GestureWorks.GestureInfo gesture)
    {
        const float scaleMin = 0.1f;

        float scaleDX = gesture.value("scale_dsx") * Screen.width;
        float scaleDY = gesture.value("scale_dsy") * Screen.height;

        // Average the two axes to keep the scale uniform; 2.0f acts as a
        // sensitivity multiplier.
        float scale = ((scaleDX + scaleDY) * 0.5f) * 2.0f;

        // Apply the change, clamping so the object never collapses or inverts.
        Vector3 newScale = transform.localScale + new Vector3(scale, scale, 0.0f);
        newScale.x = Mathf.Max(newScale.x, scaleMin);
        newScale.y = Mathf.Max(newScale.y, scaleMin);
        newScale.z = Mathf.Max(newScale.z, scaleMin);

        transform.localScale = newScale;
    }
}

Each method handler uses the GestureWorks.GestureInfo parameter to manipulate the object: we read the change in each property (the deltas) and map it onto the corresponding transform property.

Notice that the Y delta must be flipped to correspond to Unity's screen coordinates, where Y increases upward.
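
As a worked example of that mapping (the numbers are hypothetical, and assume the deltas are normalized to the display, as the code above implies):

// A drag delta of (0.01, 0.02) on a 1920x1080 display:
float dX = 0.01f * 1920f;        //  19.2 px to the right
float dY = 0.02f * 1080f * -1f;  // -21.6 px: flipped, because touch Y grows
                                 // downward while Unity screen Y grows upward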

4. Finishing Up

Everything should now be ready to build and run. Touch the bitmaps with different gestures and watch each image update in a natural, expected way.
A future lesson built around a 3D clock will introduce how additional constraints and object manipulators can be added to objects.
