Placing augmented reality photos on mobiles

https://www.devze.com 2023-02-08 02:36 · Source: Internet
I would like to overlay geolocalized photos over the camera input of a mobile device. The photos should be placed as accurately as possible over the corresponding locations, by modifying their size, tilt and rotation.

Can this be done on iPhone/Android or is the technology not there yet? If the former, are there any frameworks or examples that can be used as a head start?


There are a couple of gotchas in an application like that.

The first is orientation metadata: there's actually no standard location in EXIF headers for all the required elements of camera pose. See p. 46 of the Exif 2.2 standard. There is a field for "Direction of Image" that is used for bearing, but that's all. I've written applications that used other fields of the correct size (Bearing of Destination, for instance, which is of type RATIONAL) to store pitch and yaw, but that's not standard, and even if other applications store pitch and yaw metadata there, you'd have to experiment to see how they do it, and perhaps adjust depending on which device took the photo.
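To make the RATIONAL type concrete: EXIF stores angles such as "Direction of Image" as a numerator/denominator pair of unsigned longs, which you convert to degrees yourself. A minimal sketch (the class and method names here are illustrative, not part of any EXIF library):

```java
// Sketch: decoding an EXIF RATIONAL value (e.g. a bearing field) into degrees.
// EXIF stores such angles as a pair of unsigned 32-bit integers.
public class ExifRational {
    static double toDegrees(long numerator, long denominator) {
        if (denominator == 0) {
            throw new IllegalArgumentException("invalid EXIF rational: zero denominator");
        }
        return (double) numerator / denominator;
    }

    public static void main(String[] args) {
        // A bearing of 123.45 degrees is typically stored as 12345/100.
        System.out.println(toDegrees(12345, 100));
    }
}
```

Whichever fields you repurpose for pitch and yaw, they'd be stored and decoded the same way.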

Size is also tricky. Imagine you have a tele-zoom lens and take a very zoomed-in photo. If you wish to represent this photo with a one-to-one mapping from photo size to the apparent size of the objects in it, it should be rendered tiny. That's an extreme example, but it shows that apparent size depends on focal length, image sensor size and so on. If you just want to present an image at the correct orientation and position, but as a constant-size icon, that's much easier; you might also choose a solution somewhere in between. Note that in an AR touch interface, selectable items should be big - at least finger-tip size - to compensate for sensor-noise jitter, so take this into account if you plan to make your images selectable.
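The dependence on focal length and sensor size can be sketched with a simple pinhole model: the horizontal field of view is 2·atan(sensorWidth / (2·focalLength)), and for a one-to-one mapping the photo should cover roughly the ratio of its capture FOV to the device camera's FOV. The numbers below (a 200 mm shot on a full-frame sensor, a 28 mm-equivalent phone camera) are assumptions for illustration:

```java
// Sketch: how much of the AR viewport width a photo should cover, under a
// simple pinhole-camera model. Values and names are illustrative.
public class ApparentSize {
    // Horizontal field of view in degrees; sensor width and focal length in the same units.
    static double fieldOfViewDeg(double sensorWidthMm, double focalLengthMm) {
        return Math.toDegrees(2.0 * Math.atan(sensorWidthMm / (2.0 * focalLengthMm)));
    }

    public static void main(String[] args) {
        double photoFov  = fieldOfViewDeg(36.0, 200.0); // tele-zoom shot, full-frame sensor
        double deviceFov = fieldOfViewDeg(36.0, 28.0);  // typical wide phone camera (35mm-equiv.)
        // Fraction of the viewport the overlay should occupy for a one-to-one mapping.
        System.out.printf("photo %.1f deg, device %.1f deg, fraction %.2f%n",
                photoFov, deviceFov, photoFov / deviceFov);
    }
}
```

With these numbers the zoomed photo covers well under a fifth of the viewport width, which is the "it should be tiny" effect described above.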

Finally, don't worry about solving that second point too perfectly, because current mobile phone compasses are awful. Nokia devices give feedback on the compass-calibration level, which can be used to decide whether the user needs to perform calibration gestures, but iPhones, for example, don't. If you just take an iPhone out of your pocket and shoot, the chances of good compass accuracy are low. On top of that, you'll suffer from the usual GPS error. If you're placing your photos manually, you can avoid these problems, of course.

If you're interested in writing an article, I can send you a workshop paper and a conference poster of mine that might be relevant.


It can be done on Android; the Android framework has everything you need to achieve this.

First of all, you have the camera stream as the base view. Above the camera view you can add any view that implements SensorEventListener (for the accelerometer or orientation sensor) and overrides onSensorChanged() to update itself and any nested children. In that view you can add the photos and calculate each one's orientation, distance and azimuthal angle to the user from the GPS coordinates.
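The distance and azimuth calculation mentioned above can be sketched in plain Java with the haversine and initial-bearing formulas (on Android itself, Location.distanceBetween gives you both in one call). The coordinates below are made up for illustration:

```java
// Sketch of the distance/bearing math described in the answer, in plain Java.
public class GeoMath {
    static final double EARTH_RADIUS_M = 6371000.0;

    // Great-circle distance between two lat/lon points, in metres (haversine formula).
    static double distanceM(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    // Initial bearing from point 1 to point 2, in degrees clockwise from north.
    static double bearingDeg(double lat1, double lon1, double lat2, double lon2) {
        double dLon = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLon) * Math.cos(Math.toRadians(lat2));
        double x = Math.cos(Math.toRadians(lat1)) * Math.sin(Math.toRadians(lat2))
                 - Math.sin(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.cos(dLon);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }

    public static void main(String[] args) {
        // User in central London, photo geotagged roughly 1 km to the east.
        double d = distanceM(51.5074, -0.1278, 51.5074, -0.1134);
        double b = bearingDeg(51.5074, -0.1278, 51.5074, -0.1134);
        System.out.printf("distance %.0f m, bearing %.1f deg%n", d, b);
    }
}
```

Comparing the photo's bearing against the compass azimuth from onSensorChanged() tells you where on screen to draw it; the distance can drive the icon size or a label.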

Finally, there are many tutorials about this online; a quick Google search will turn them up.
