Hoping someone can help me with this. I'm trying to create an AR experience for a prefab house builder. I've made a video showing my Reality Composer screen and the files being viewed on my iPhone, viewable here:
The first thing I'm showing on my iPhone is the .reality file. You can see it at first has trouble placing the blue hexagon shape, but it locates correctly after I move the phone left to right. The main problem here is that it doesn't bring in the house at all: when I switch to object view, only the blue hexagon is present, no house. The other problem is that the model loses its tether to the image target as soon as the image is out of view of the camera. That obviously won't work for something like a prefab house walkthrough, because people want to walk all around and through the house on site.
Now for the .usdz export. As you can see, the house is present in object view, although it is tipped on its side. I believe I can fix that with the Revit exporter; it came into Reality Composer like this too, but I used the rotation tools to make it sit flat and orient it with the image target. The other problem is that in AR the image tracking doesn't seem to be working at all. I did notice a slight vibration when the phone saw the image, but no blue hexagon or house was visible.
For some background: I modeled this house in Revit and exported it as a USDZ file using a plugin. I'm running Reality Composer version 1.5 in Xcode. I'm trying to develop a repeatable procedure for this so I can do it for clients regularly. Please help! Thanks.
This issue remains: for some reason the house model is still not showing up in my scene after export. To illustrate this, I'm attaching two photos. The first shows my scene in Reality Composer with the house shrunk to 10% and placed just to the right of the stool. The second shows the same scene as it appears on the iPhone as an exported .reality file. The house is not included, and it's still a mystery to me. Could it be a file size limit it isn't telling me about?
Curious if there's a solution to this issue, as I have the same situation. I exported a .usdz from KeyShot 10, then imported it into Reality Composer for macOS. I can see my model in Reality Composer on the desktop, but when I export the project and/or scene and open it in Reality Composer on iOS/iPhone, the model does not appear. I also tried exporting from KeyShot as a .usdz with the option to export as compressed glTF/GLB using Draco. All exports were at 50 dpi, 16 samples, with the preferred geometry node option checked. Any ideas on how to resolve this are welcome.
In a second-grade classroom, we decided this year to combine our informative writing unit with augmented reality. Each student chose an animal they wanted to research. Then, in a collaborative Keynote, students each drew their animal and recorded audio of themselves reading their informative writing. The teacher then exported the images with transparent backgrounds along with the audio files. In Reality Composer, the pictures were given behaviors so that tapping an image would trigger the audio of the student reading their informative writing. The file was then shared with students. Students were given a "map" to track which animals they could learn about. They then went outside to an open field and explored their AR Zoo, learning from their peers about each of the animals. This was the class's first experience learning with AR.
Students did not have Reality Composer on their devices. We created the AR experience in Reality Composer on a MacBook and exported it as a .reality file. This allowed us to AirDrop the file to students, who could open it using AR Quick Look, which is built into the device. While this worked well, we had to be right up next to an animal drawing before the tap gesture would trigger the audio. Had students had Reality Composer on their devices, it would have eliminated some of the implementation issues we ran into.
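For reference, the same AR Quick Look viewer that opens an AirDropped .reality file can also be presented from a small app. A rough sketch, assuming the exported file is bundled with the app; the file name "AnimalZoo.reality" and class name are placeholders:

```swift
import UIKit
import QuickLook
import ARKit

// Rough sketch: presenting a bundled .reality file with AR Quick Look from code,
// instead of AirDropping the file to each device. "AnimalZoo.reality" is a placeholder.
final class ZooPreviewController: UIViewController, QLPreviewControllerDataSource {
    func showZoo() {
        let preview = QLPreviewController()
        preview.dataSource = self
        present(preview, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        // Force-unwrapped for brevity; the file must be included in the app bundle.
        let url = Bundle.main.url(forResource: "AnimalZoo", withExtension: "reality")!
        return ARQuickLookPreviewItem(fileAt: url)
    }
}
```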
Whenever we go into capture mode with the Reality Composer Pro tool, things only render out of one eye on the device and look all warped. The resulting capture only shows the passthrough hands and a single horizontal line of visuals.
Thanks for the response! I have used the on-device recorder functionality, but it does leave something to be desired for marketing materials. It would be great if this could be supported in the future; I just submitted it as an idea for the roadmap.
Just to be clear, this thread/forum is about fully immersive VR apps built with Unity. This app mode does not use PolySpatial, which is our technology for Unity to use RealityKit for rendering in mixed reality. Are you trying to record a mixed reality app with Reality Composer Pro? As far as I know, that should work just fine.
Reality Composer is an Augmented Reality (AR) prototyping tool from Apple. You can create 3D content that interacts with the world around you using Apple mobile devices such as an iPhone or iPad. It is convenient in cases like online shopping, where your website visitors can see how a product would look in their home. Reality Composer is a companion tool to RealityKit, the framework that loads Reality Composer content and renders it at runtime.
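As a rough illustration of that relationship, once a Reality Composer project is exported, a small RealityKit sketch like the following is enough to display it in an app. The file name "Kitchen.reality" and the class name are placeholder assumptions:

```swift
import UIKit
import RealityKit

// Minimal sketch: loading an exported Reality Composer scene into an ARView at runtime.
// "Kitchen.reality" is a hypothetical file name bundled with the app.
final class ARViewController: UIViewController {
    let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        // Load the .reality file and add its anchor (with all child entities,
        // behaviors, and audio) to the AR scene.
        if let url = Bundle.main.url(forResource: "Kitchen", withExtension: "reality"),
           let sceneAnchor = try? Entity.loadAnchor(contentsOf: url) {
            arView.scene.addAnchor(sceneAnchor)
        }
    }
}
```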
USDZ is a 3D file format developed by Apple and Pixar, and it is the only 3D file type Reality Composer supports for now. That's why you might need to convert other types of 3D files to USDZ. Vectary can convert more than 60 types of 3D files to USDZ, and its 3D editor helps you customize and optimize the file.
Tip: If you have already saved your model to your desktop and you are not using the Mac version of Reality Composer, use AirDrop or email to transfer the USDZ file to the iPad or iPhone where your Reality Composer app is installed.
You may have already found out that this kind of Augmented Reality can be experienced on iOS devices only. The technology is called ARKit, and it is great. The only limitation is that it does not work on Android devices, which use a different AR technology, ARCore. Luckily, Vectary offers a WebAR viewer that bridges both technologies. It works as simply as embedding a YouTube video on your website; no external app is needed.
At Augmentop, we have built dozens of Augmented Reality prototypes for clients and for fun, and we often needed a week or two to finish each prototype: laying out 3D scenes, creating user interactions, and adding basic animations to 3D objects.
And just as Apple Keynote is used to prototype web and mobile apps, Reality Composer will help product designers who want to get into AR, product managers who want to write requirements and specs for AR apps, and entrepreneurs who want to prototype new AR ideas without having to hire outside help to do it.
The tool brings the familiarity, simplicity and ease of use of Keynote into Augmented Reality, enabling you to focus on the basic tasks of prototyping, without getting lost in the detail of modeling, animation, and writing code.
At a high level, prototyping AR apps in Reality Composer is very similar to prototyping mobile apps in Keynote: create scenes (instead of slides), place shapes, add interactions, and then create animations.
Before jumping into Reality Composer, create a couple of sketches that capture the high-level layout and design of your experience, storyboard the user interactions, and annotate the animations.
Useful hack: Even though paper is an excellent medium for sketching 3D/AR experiences, we found that sketching some AR experiences in VR using Tilt Brush and Google Blocks was more fun, more productive, and more accurate. If you have a VR headset, we recommend using it to sketch AR experiences in 3D space.
Create a new project in Reality Composer, and select the type of anchor you want the scene attached to (horizontal surface, vertical surface, image, face, or scanned 3D object). Each scene in your Reality Composer project can be attached to a different anchor, but you should generally keep it consistent across all scenes. You may use different anchor images for different scenes to show how the prototype works across various examples.
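For orientation, these anchor choices map directly onto RealityKit's anchoring targets if you ever take the prototype into code. A rough sketch; the resource group "ARResources" and the asset names are placeholder assumptions:

```swift
import RealityKit

// Sketch: Reality Composer's anchor types expressed as RealityKit AnchorEntity targets.
// Group and asset names below are placeholders for entries in an AR resource catalog.
let horizontalAnchor = AnchorEntity(plane: .horizontal)                  // table / floor
let verticalAnchor   = AnchorEntity(plane: .vertical)                    // wall
let imageAnchor      = AnchorEntity(.image(group: "ARResources",
                                           name: "SitePlanMarker"))      // printed image target
let faceAnchor       = AnchorEntity(.face)                               // front-camera face tracking
let objectAnchor     = AnchorEntity(.object(group: "ARResources",
                                            name: "ScannedStool"))       // scanned 3D object
```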
We highly recommend naming all your objects by selecting them on the scene and editing their names in the properties panel. This makes selecting them faster from the right-click menu when the scene gets more complex and more crowded.
We also found that creating 2D assets in Sketch or Figma, exporting them as PNGs with transparent backgrounds, and then dragging-and-dropping them into Reality Composer, saves a lot of time that would be otherwise required to create their 3D counterparts.
Useful hack: You can actually prototype an entire AR experience in Reality Composer by dragging in a bunch of PNGs from the web, or by creating them in Photoshop or Sketch, importing them into Reality Composer, adjusting their location, size and orientation, and adding interactivity and animations to simulate the full experience. This is similar to creating 2D cutouts of 3D models to pre-visualize a game or a movie. Once everything looks good, replace those images with 3D assets, and fine-tune them.
Each behavior consists of a trigger (when to execute the behavior) and a sequence of actions (what to do when the trigger fires). For our prototype, the triggers we used most are Scene Start and Tap, and the action we use most is Change Scene.
We use Scene Start triggers to hide some shapes once the scene is displayed, or to animate them to their initial positions. We use Tap triggers to transition from one scene to another using the Change Scene action. This is similar to creating hyperlinks in Keynote to transition from one UI screen to another.
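To appreciate what a Tap trigger saves you, here is roughly the hit-testing code you would otherwise write if the prototype later moved into Xcode. This sketch extends the hypothetical ARViewController shown earlier; "NextSceneButton" is a placeholder for whatever you named the object in the properties panel:

```swift
import UIKit
import RealityKit

// Rough sketch of a hand-rolled "Tap" trigger. The tapped entity needs collision
// shapes to be hit-testable; "NextSceneButton" is a placeholder entity name.
extension ARViewController {
    func installTapGesture() {
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        arView.addGestureRecognizer(tap)
    }

    @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
        let point = recognizer.location(in: arView)
        // entity(at:) returns the topmost entity with collision under the touch point.
        if let tapped = arView.entity(at: point), tapped.name == "NextSceneButton" {
            // React here, e.g. swap anchors to the next scene, which is roughly
            // what the Change Scene action does for you in Reality Composer.
            tapped.isEnabled = false
        }
    }
}
```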
However, some experiences, like games and simulations, require animation in order to be tested with users. Other experiences can benefit from animations to keep users in context while transitioning from one scene to another.