
How to use iOS (Swift) SceneKit SCNSceneRenderer unprojectPoint properly

I'm developing some code using SceneKit on iOS, and I want to determine the x and y coordinates on the global z plane (where z is 0.0) from a tap gesture. My setup is as follows:

override func viewDidLoad() {
    super.viewDidLoad()

    // create a new scene
    let scene = SCNScene()

    // create and add a camera to the scene
    let cameraNode = SCNNode()
    let camera = SCNCamera()
    cameraNode.camera = camera
    scene.rootNode.addChildNode(cameraNode)
    // place the camera
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

    // create and add an ambient light to the scene
    let ambientLightNode = SCNNode()
    ambientLightNode.light = SCNLight()
    ambientLightNode.light.type = SCNLightTypeAmbient
    ambientLightNode.light.color = UIColor.darkGrayColor()
    scene.rootNode.addChildNode(ambientLightNode)

    let triangleNode = SCNNode()
    triangleNode.geometry = defineTriangle()
    scene.rootNode.addChildNode(triangleNode)

    // retrieve the SCNView
    let scnView = self.view as SCNView

    // set the scene to the view
    scnView.scene = scene

    // configure the view
    scnView.backgroundColor = UIColor.blackColor()
    // add a tap gesture recognizer
    let tapGesture = UITapGestureRecognizer(target: self, action: "handleTap:")
    scnView.gestureRecognizers = [tapGesture]
}

func handleTap(gestureRecognize: UIGestureRecognizer) {
    // retrieve the SCNView
    let scnView = self.view as SCNView
    // get the tap location in view coordinates
    let p = gestureRecognize.locationInView(scnView)
    // get the camera
    let camera = scnView.pointOfView.camera

    // screenZ is the fractional distance between zNear and zFar
    let screenZ = Float((15.0 - camera.zNear) / (camera.zFar - camera.zNear))
    let scenePoint = scnView.unprojectPoint(SCNVector3Make(Float(p.x), Float(p.y), screenZ))
    println("tapPoint: (\(p.x), \(p.y)) scenePoint: (\(scenePoint.x), \(scenePoint.y), \(scenePoint.z))")
}

func defineTriangle() -> SCNGeometry {

    // Vertices
    let vertices: [SCNVector3] = [
        SCNVector3Make(-2.0, -2.0, 0.0),
        SCNVector3Make(2.0, -2.0, 0.0),
        SCNVector3Make(0.0, 2.0, 0.0)
    ]

    let vertexData = NSData(bytes: vertices, length: vertices.count * sizeof(SCNVector3))
    let vertexSource = SCNGeometrySource(data: vertexData,
        semantic: SCNGeometrySourceSemanticVertex,
        vectorCount: vertices.count,
        floatComponents: true,
        componentsPerVector: 3,
        bytesPerComponent: sizeof(Float),
        dataOffset: 0,
        dataStride: sizeof(SCNVector3))

    // Normals
    let normals: [SCNVector3] = [
        SCNVector3Make(0.0, 0.0, 1.0),
        SCNVector3Make(0.0, 0.0, 1.0),
        SCNVector3Make(0.0, 0.0, 1.0)
    ]

    let normalData = NSData(bytes: normals, length: normals.count * sizeof(SCNVector3))
    let normalSource = SCNGeometrySource(data: normalData,
        semantic: SCNGeometrySourceSemanticNormal,
        vectorCount: normals.count,
        floatComponents: true,
        componentsPerVector: 3,
        bytesPerComponent: sizeof(Float),
        dataOffset: 0,
        dataStride: sizeof(SCNVector3))

    // Indices
    let indices: [CInt] = [0, 1, 2]
    let indexData = NSData(bytes: indices, length: sizeof(CInt) * indices.count)
    let indexElement = SCNGeometryElement(
        data: indexData,
        primitiveType: .Triangles,
        primitiveCount: 1,
        bytesPerIndex: sizeof(CInt))

    let geo = SCNGeometry(sources: [vertexSource, normalSource], elements: [indexElement])

    // material
    let material = SCNMaterial()
    material.diffuse.contents = UIColor.redColor()
    material.doubleSided = true
    material.shininess = 1.0
    geo.materials = [material]

    return geo
}


As you can see, I have a triangle that is 4 units tall by 4 units wide, set on the z plane (z = 0) and centered at x, y (0.0, 0.0). The camera is the default SCNCamera, which looks in the negative z direction, and I've placed it at (0, 0, 15). The default values for zNear and zFar are 1.0 and 100.0 respectively. In my handleTap method, I take the x and y screen coordinates of the tap and attempt to find the x and y global scene coordinates where z = 0.0, using a call to unprojectPoint.

The docs for unprojectPoint indicate


Unprojecting a point whose z-coordinate is 0.0 returns a point on the near clipping plane; unprojecting a point whose z-coordinate is 1.0 returns a point on the far clipping plane.


While it does not specifically say that the relationship between the near and far planes is linear for points in between, I have made that assumption, and I calculate screenZ as the fractional distance between the near and far planes at which the z = 0 plane is located. To check my answer, I can tap near the corners of the triangle, because I know where they are in global coordinates.

My problem is that I'm not getting the correct values, and I'm not getting consistent values when I change the zNear and zFar clipping planes on the camera. So my question is: how can I do this? In the end, I want to create a new piece of geometry and place it on the z plane where the user tapped.

Thanks in advance for your help.

Answer

Typical depth buffers in a 3D graphics pipeline are not linear. Perspective division causes depths in normalized device coordinates to be on a different scale.

So the z-coordinate you're feeding into unprojectPoint isn't actually the one you want.
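To see how far off the linear assumption is, here's a small sketch of the conventional OpenGL-style perspective depth mapping (assuming SceneKit follows that convention), using the question's numbers (zNear = 1, zFar = 100, plane at eye-space distance 15):

```swift
// Window-space depth (0 = near plane, 1 = far plane) of a point at
// eye-space distance d in front of the camera, for a standard
// perspective projection. This is a sketch, not SceneKit API.
func windowDepth(d: Double, zNear n: Double, zFar f: Double) -> Double {
    // NDC depth after perspective division, then remapped from [-1, 1] to [0, 1]
    let ndcZ = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d)
    return (ndcZ + 1.0) / 2.0
}

let actual = windowDepth(15.0, zNear: 1.0, zFar: 100.0)   // ≈ 0.943
let linearGuess = (15.0 - 1.0) / (100.0 - 1.0)            // ≈ 0.141
```

The plane 15 units away sits at roughly 0.94 in window depth, not 0.14 — depth precision is concentrated near the near plane, which is why the linear estimate lands far from the intended plane.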

How, then, to find the normalized-depth coordinate matching a plane in world space? Well, it helps if that plane is orthogonal to the camera's view direction, which yours is. Then all you need to do is project a point on that plane:

let projectedOrigin = scnView.projectPoint(SCNVector3Zero)

Now you have the location of the world origin in 3D view + normalized-depth space. To map other points in 2D view space onto this plane, use the z-coordinate from this vector:

let vp = gestureRecognize.locationInView(scnView)
let vpWithZ = SCNVector3(x: Float(vp.x), y: Float(vp.y), z: projectedOrigin.z)
let worldPoint = scnView.unprojectPoint(vpWithZ)

This gets you a point in world space that maps the click/tap location to the z = 0 plane, suitable for use as the position of a node if you want to show that location to the user.
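Putting it together in the question's handleTap, and dropping a marker node at the tapped location (the sphere marker and its radius are my choice for illustration, not part of the original code):

```swift
func handleTap(gestureRecognize: UIGestureRecognizer) {
    let scnView = self.view as SCNView
    let p = gestureRecognize.locationInView(scnView)

    // depth of the z = 0 plane in normalized window coordinates
    let projectedOrigin = scnView.projectPoint(SCNVector3Zero)

    // unproject the tap at that depth to land on the z = 0 plane
    let vpWithZ = SCNVector3(x: Float(p.x), y: Float(p.y), z: projectedOrigin.z)
    let worldPoint = scnView.unprojectPoint(vpWithZ)

    // place a small sphere where the user tapped
    let marker = SCNNode(geometry: SCNSphere(radius: 0.1))
    marker.position = worldPoint
    scnView.scene.rootNode.addChildNode(marker)
}
```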

(Note that this approach works only as long as you're mapping onto a plane that's perpendicular to the camera's view direction. If you want to map view coordinates onto a differently-oriented surface, the normalized-depth value in vpWithZ won't be constant.)
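If you do need an arbitrarily oriented plane, one common alternative (my sketch, not part of the approach above) is to unproject the tap at two depths and intersect the resulting ray with the plane analytically:

```swift
// Sketch: map a tap to an arbitrary plane by ray/plane intersection.
// planePoint and planeNormal define the target plane; returns nil when
// the tap ray is (nearly) parallel to the plane.
func tapPointOnPlane(scnView: SCNView, tap: CGPoint,
                     planePoint: SCNVector3, planeNormal: SCNVector3) -> SCNVector3? {
    // unproject at the near (z = 0) and far (z = 1) clipping planes
    let nearPt = scnView.unprojectPoint(SCNVector3(x: Float(tap.x), y: Float(tap.y), z: 0.0))
    let farPt  = scnView.unprojectPoint(SCNVector3(x: Float(tap.x), y: Float(tap.y), z: 1.0))

    // ray direction and standard ray/plane intersection
    let dir = SCNVector3(x: farPt.x - nearPt.x, y: farPt.y - nearPt.y, z: farPt.z - nearPt.z)
    let denom = planeNormal.x * dir.x + planeNormal.y * dir.y + planeNormal.z * dir.z
    if abs(denom) < 1e-6 { return nil }   // ray parallel to plane

    let diff = SCNVector3(x: planePoint.x - nearPt.x, y: planePoint.y - nearPt.y, z: planePoint.z - nearPt.z)
    let t = (planeNormal.x * diff.x + planeNormal.y * diff.y + planeNormal.z * diff.z) / denom
    return SCNVector3(x: nearPt.x + t * dir.x, y: nearPt.y + t * dir.y, z: nearPt.z + t * dir.z)
}
```

This works for any plane orientation, at the cost of two unprojectPoint calls per tap.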