Marker based AR with Monogame and WP8

Marker based Augmented Reality is really easy thanks to toolkits like SLAR. A bigger challenge arises when you want to use those toolkits in a Monogame application. There's a lot of information out there on how to do this in the classic Silverlight / XNA mashup we used to have in Windows Phone 7, but since XNA isn't supported on WP8 and Monogame / Silverlight combinations aren't possible, I embarked on a journey to get this done.


Before I start with this post I would like to give credit to the three articles / demo apps that helped me create it.

All of these articles contain code that can be found in my demo solution attached to this post.

Displaying the phone’s camera feed

In a XAML application it's easy to get the camera feed displayed in the app: attach a VideoBrush to a Canvas or a Rectangle and you're done. In a Monogame application we'll have to do a bit more work.
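For comparison, the Silverlight route takes only a few lines. Here's a rough sketch of that approach; myCanvas is a hypothetical Canvas defined in the page's XAML, and the types come from Microsoft.Devices (PhotoCamera and the SetSource extension) and System.Windows.Media (VideoBrush).

```csharp
// Sketch of the classic Silverlight approach: paint a Canvas with a
// VideoBrush that is fed by the phone's camera.
// Assumes a Canvas named myCanvas exists in the page's XAML.
private PhotoCamera _camera;

private void ShowCameraFeed()
{
    _camera = new PhotoCamera(CameraType.Primary);
    var brush = new VideoBrush();
    brush.SetSource(_camera); // extension method from Microsoft.Devices
    myCanvas.Background = brush;
}
```

In a Monogame game there is no visual tree to hang a brush on, which is why the rest of this post renders the preview buffer into a texture instead.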

We’ll start with a blank Monogame Windows Phone 8 game.

Note: the current Monogame templates included in the installer are only suited for Visual Studio 2012. However, once the game’s created you can simply open the solution in Visual Studio 2013.

First thing to do is enable the correct capability. In the WMAppManifest GUI, in the Capabilities tab, check the ID_CAP_ISV_CAMERA checkbox.

From here on out, everything we’ll be doing will be in the Game1.cs class. The GamePage files are just for initializing Monogame and rendering the game. Running the app at this point should just give you a nice blue background.

If the app shows its pretty blue background, it's time to declare some private fields in the Game1 class.

Code Snippet
//camera preview
private PhotoCaptureDevice _photoDevice;
private Texture2D _previewTexture;
private bool _newPreviewFrameAvailable;
private int _backBufferXCenter;
private int _backBufferYCenter;
private int _textureYCenter;
private int _textureXCenter;
private float _yScale;
private float _xScale;
private int[] _previewData2;
private int[] _previewData1;
private bool _isFocussing;

Let's go over these fields:

  • _photoDevice will be our access to the phone’s camera
  • _previewTexture will hold the frame currently being drawn, coming from the camera’s previewbuffer
  • _newPreviewFrameAvailable is a flag that will be set to true whenever a new frame is ready to be fetched and drawn
  • _backBufferXCenter and _backBufferYCenter: together these form the center point of the device's screen; we need this to position the preview image in the middle of the screen
  • _textureYCenter and _textureXCenter: together these form the center point of the preview image
  • _yScale and _xScale will contain the height and width scale so that we can draw the preview image full screen
  • _previewData1 and _previewData2 will hold the new and previous pixels from the camera's preview buffer; we need both to prevent a frame from being overwritten with a new one while it's still being drawn
  • _isFocussing is a flag that prevents the camera's focus function from being called multiple times at once.

The next step is the Initialize method. Note that this method is overridden from the base Game class that Game1 inherits; it gets called automatically at the game's start.

Code Snippet
  1. protected override async void Initialize()
  2. {
  3.     _spriteBatch = new SpriteBatch(GraphicsDevice);
  5.     Size previewSize = PhotoCaptureDevice.GetAvailablePreviewResolutions(CameraSensorLocation.Back)[0];
  6.     Size captureSize = PhotoCaptureDevice.GetAvailableCaptureResolutions(CameraSensorLocation.Back)[0];
  8.     CreateTexture((int)previewSize.Width, (int)previewSize.Height);
  10.     _previewData1 = new int[_previewTexture.Width * _previewTexture.Height];
  11.     _previewData2 = new int[_previewTexture.Width * _previewTexture.Height];
  13.     _photoDevice = await PhotoCaptureDevice.OpenAsync(CameraSensorLocation.Back, captureSize);
  14.     _photoDevice.PreviewFrameAvailable += photoDevice_PreviewFrameAvailable;
  16.     _backBufferXCenter = GraphicsDevice.Viewport.Width / 2;
  17.     _backBufferYCenter = GraphicsDevice.Viewport.Height / 2;
  18.     _textureXCenter = _previewTexture.Width / 2;
  19.     _textureYCenter = _previewTexture.Height / 2;
  20.     _yScale = (float)GraphicsDevice.Viewport.Width / _previewTexture.Height;
  21.     _xScale = (float)GraphicsDevice.Viewport.Height / _previewTexture.Width;
  23.     base.Initialize();
  24. }

First we initialize the spritebatch; this class is responsible for drawing 2D textures, in this case the camera preview.

Next we get the preview size and capture size from the camera in the phone. We create a texture with the CreateTexture method (explained a bit lower) and declare the two arrays that will hold the current and previous frames.

The camera is launched asynchronously on line 13, hooking up the event handler for the PreviewFrameAvailable event on line 14.

Next, the center points for both the device’s screen and the preview texture are calculated, followed by calculating the scale.

Here’s the CreateTexture method

Code Snippet
private void CreateTexture(int textureWidth, int textureHeight)
{
    _previewTexture = new Texture2D(GraphicsDevice, textureWidth, textureHeight);
    Color[] data = new Color[textureWidth * textureHeight];

    for (int i = 0; i < textureWidth * textureHeight; i++)
    {
        data[i] = Color.White;
    }

    _previewTexture.SetData(data);
}

This method just creates a white texture, the size of what we expect the preview frames to be.

Next up is the event handler for the PreviewFrameAvailable event

Code Snippet
void photoDevice_PreviewFrameAvailable(ICameraCaptureDevice sender, object args)
{
    _newPreviewFrameAvailable = true;
}

This sets a flag to true; the flag will be checked in the Draw method to prevent synchronization problems between threads.

Almost time to show something on screen! Here's the Draw method; note that this is also an overridden method. In Monogame, Update and Draw form the game loop and are called multiple times per second. Update is where you update the world, check for collisions, and so on, while Draw is where all the graphical drawing logic sits.

Code Snippet
protected override void Draw(GameTime gameTime)
{
    if (_newPreviewFrameAvailable)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);

        // a new frame is available, get it from the previewbuffer
        _photoDevice.GetPreviewBufferArgb(_previewData2);

        //camera uses RGB, Texture2D uses BGR, swap color channels
        SwapRedBlueChannel(_previewData2);

        var previewDataTemp = _previewData1;
        _previewData1 = _previewData2;
        _previewData2 = previewDataTemp;

        //Convert the pixel array to a texture
        _previewTexture.SetData(_previewData1);
        _newPreviewFrameAvailable = false;
    }

    //draw the previewframe
    _spriteBatch.Begin();
    _spriteBatch.Draw(_previewTexture, new Vector2(_backBufferXCenter, _backBufferYCenter), null,
        Color.White,
        (float)Math.PI / 2.0f, new Vector2(_textureXCenter, _textureYCenter),
        new Vector2(_xScale, _yScale), SpriteEffects.None, 0.0f);
    _spriteBatch.End();

    base.Draw(gameTime);
}

First we check the flag that was set in the PreviewFrameAvailable event. If it's true, we fetch the ARGB preview buffer from the device. That buffer is an integer array; we pass in one of our two integer arrays and it gets filled with the buffer's data. One problem here is that the camera returns RGB values while Texture2D uses BGR values. SwapRedBlueChannel is a small method that swaps those channels. Feel free to comment that line out and see for yourself what it does: all blue colors will show up red on your phone's screen and vice versa.
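In case you're wondering what SwapRedBlueChannel could look like: here's a minimal sketch, assuming each int in the buffer packs a pixel as 0xAARRGGBB (the exact implementation in the demo solution may differ).

```csharp
// Sketch of a red/blue channel swap, assuming 0xAARRGGBB packing:
// keep the alpha and green bytes, exchange the red and blue bytes in place.
private void SwapRedBlueChannel(int[] pixelData)
{
    for (int i = 0; i < pixelData.Length; i++)
    {
        int pixel = pixelData[i];
        int r = (pixel >> 16) & 0xFF; // red byte
        int b = pixel & 0xFF;         // blue byte
        pixelData[i] = (pixel & ~0x00FF00FF) | (b << 16) | r;
    }
}
```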

The next part swaps the current frame with the previous frame; this prevents a frame that is currently being drawn on screen from being overwritten by a new one.

The SetData method on Texture2D takes in an array and will use that data to create the texture’s image.

And finally, we clear the flag again to wait for the next available frame. We are now ready to draw the image on screen. The drawing is done using the spritebatch; all drawing should happen between _spriteBatch.Begin() and _spriteBatch.End().

The Draw method has several overloads. The overload we’re using here gives us the ability to rotate and scale the texture. We need this as Monogame on Windows Phone currently has no landscape support.

Let’s break down the parameters for the Draw method.

  • _previewTexture: our Texture2D that got filled with the preview frame's data; this is the texture that will get drawn on screen
  • new Vector2(_backBufferXCenter, _backBufferYCenter): the position where the texture will get drawn. By default Monogame uses the upper left corner of the texture to position it; in this overload of Draw we can move that point somewhere else, as you'll see in the origin parameter
  • null: we don't need a source rectangle here, just pass in null
  • Color.White: draw this texture in its original colors
  • (float)Math.PI / 2.0f: the rotation; it rotates the entire image 90 degrees, moving us from portrait to landscape mode
  • new Vector2(_textureXCenter, _textureYCenter): the origin point; this moves the rotation and location point from the upper left corner of the texture to its center
  • new Vector2(_xScale, _yScale): scales the image to be fullscreen
  • SpriteEffects.None: no extra effects needed
  • 0.0f: default depth

With all this in place, run the game and you should see your camera image drawn full screen inside a game. However, the image isn't focusing, and an image in focus is pretty important for marker detection. Let's make a tap on the screen focus the camera.

Focusing the camera

First we need to enable the tap gesture in the game. In the Game1 constructor add this line.

Code Snippet
TouchPanel.EnabledGestures = GestureType.Tap;

The logic will go in the Update part of our gameloop, once again an overridden method.

Code Snippet
protected override async void Update(GameTime gameTime)
{
    if (_photoDevice == null) return;

    //if a touch event is available, focus the camera
    if (TouchPanel.IsGestureAvailable)
    {
        if (TouchPanel.ReadGesture().GestureType == GestureType.Tap && !_isFocussing)
        {
            _isFocussing = true;
            await _photoDevice.FocusAsync();
            _isFocussing = false;
        }
    }

    base.Update(gameTime);
}

First we check if the camera is already initialized; we can't focus something that doesn't exist yet. If a gesture is available and it's a tap gesture, we set the focusing flag to true to prevent another focus call while one is in progress. The focusing itself is as easy as calling the asynchronous FocusAsync method on the camera. Reset the flag and we're done. The camera should now focus whenever you tap the screen in the game.

Now that we have our camera in place, it’s time for the fun stuff. The marker detection!

Augmented Reality

As mentioned in the beginning of this article, we’re going to use the SLAR toolkit. The problem is that the current released version of SLAR (released in May 2010) isn’t compatible with our Windows Phone 8 project. Luckily we can just pluck the code from its Codeplex page, recompile it and it just works. I took the lazy way out and just added the SLAR project to my solution. You’ll notice that SLAR depends on another library called Matrix3DEx that has the same compatibility issue, luckily for us that project also lives on Codeplex. Here are the links

My solution currently looks like this

Also make sure to copy the folders Common and CommonData to the folder where your solution lives or the project won’t compile. Don’t forget to add a reference to your game for the SLAR project.

Back to the code, in the Game1 class we’ll need some extra fields

Code Snippet
private GrayBufferMarkerDetector _arDetector;
private bool _isInitialized;
private bool _isDetecting;
private byte[] _buffer;
private DetectionResult _markerResult;

The first field is the detector; there are several kinds of detectors in SLAR, and we're using the gray buffer one here. We need two flags: one to show that everything is initialized and one to show whether a detection is currently running. Last but not least we need a byte array that will store the frame that we're currently scanning for markers. The DetectionResult will hold the result of every detected marker so that we can use it to position our model.

Next we’ll initialize all the AR related bits, I’ve put this in a separate method that gets called from the existing Initialize method.

Code Snippet
  1. private void InitializeAR()
  2. {
  3.     //  Initialize the Detector
  4.     _arDetector = new GrayBufferMarkerDetector();
  6.     // Load the marker pattern. It has 16x16 segments and a width of 80 millimeters
  7.     var marker = Marker.LoadFromResource("data/Marker_SLAR_16x16segments_80width.pat", 16, 16, 80);
  9.     // The perspective projection has the near plane at 1 and the far plane at 4000
  10.     _arDetector.Initialize((int)_photoDevice.PreviewResolution.Width, (int)_photoDevice.PreviewResolution.Height, 1, 4000, marker);
  12.     _isInitialized = true;
  13. }

The way SLAR works is that it loads in a pattern (*.pat) file; that pattern gets searched for in every detect call. You can create your own patterns or use prebuilt ones. I'm using one that comes with the SLAR samples. Make sure that the pattern is included in your solution and that its build action is set to Resource. Line 7 loads the pattern, line 10 initializes our detector, passing in the expected resolution, the near and far planes and the marker.

And now for the magical piece of code that does the actual detecting

Code Snippet
private void Detect()
{
    if (_isDetecting || !_isInitialized)
    {
        return;
    }

    //Here is where we try to detect the marker
    _isDetecting = true;

    try
    {
        // Update buffer size
        var pixelWidth = _photoDevice.PreviewResolution.Width;
        var pixelHeight = _photoDevice.PreviewResolution.Height;
        if (_buffer == null || _buffer.Length != pixelWidth * pixelHeight)
        {
            _buffer = new byte[System.Convert.ToInt32(pixelWidth * pixelHeight)];
        }

        // Grab snapshot for the marker detection
        _photoDevice.GetPreviewBufferY(_buffer);

        //Detect the markers
        _arDetector.Threshold = 100;
        var dr = _arDetector.DetectAllMarkers(_buffer, System.Convert.ToInt32(pixelWidth), System.Convert.ToInt32(pixelHeight));

        //Set the marker result if the marker is found
        _markerResult = dr.HasResults ? dr[0] : null;
    }
    finally
    {
        _isDetecting = false;
    }
}

The Detect method will get called from a timer, more on that a bit lower in the article.

First we’ll check if it’s okay to do detection, a detection cannot be in progress and everything should be initialized. Then we’ll set the is detecting flag to true.

We keep the width and height of the camera's preview resolution in two variables and initialize the buffer if necessary. We fill the buffer with the luminance data from the camera by calling the GetPreviewBufferY method; this differs from the method we use to show the camera stream, but the luminance data is sufficient for SLAR to do its detection. Then we pass the buffer to the marker detector, together with the frame's width and height. If a result is found we keep it in the DetectionResult field; if not, we set the field to null. Finally, we clear the is-detecting flag so that we're ready to detect again.
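To give a feel for what the Threshold value of 100 does: conceptually, the luminance buffer gets binarized against that threshold before the square marker border is searched for. This is a simplified illustration, not SLAR's actual code.

```csharp
// Simplified sketch of thresholding a luminance (Y) buffer, as an
// illustration of what a detector threshold of 100 means. Each Y byte
// becomes either "bright" (true) or "dark" (false, a potential marker border).
private static bool[] Binarize(byte[] yBuffer, byte threshold)
{
    var binary = new bool[yBuffer.Length];
    for (int i = 0; i < yBuffer.Length; i++)
    {
        binary[i] = yBuffer[i] > threshold;
    }
    return binary;
}
```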

Very easy to use, SLAR takes care of all the rest. All we need to do is call the Detect method. In the overridden Initialize method, add this.

Code Snippet
InitializeAR();

//marker detection sits on another timer than the update / draw mechanism to prevent excessive detection
Deployment.Current.Dispatcher.BeginInvoke(() =>
{
    //Run the detection separate from the update
    var dispatcherTimer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(100) };
    dispatcherTimer.Tick += (sender, e1) => Detect();
    dispatcherTimer.Start();
});

The detection runs separately from the gameloop to prevent excessive calls to the detect method. I only want to detect every 100 milliseconds.
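As an alternative design, the same throttling could live inside the game loop itself by accumulating elapsed time in Update. A sketch, using a hypothetical _timeSinceDetect field that is not part of the original code:

```csharp
// Alternative sketch: throttle Detect() from inside the game loop instead
// of using a DispatcherTimer. _timeSinceDetect is a hypothetical extra field.
private TimeSpan _timeSinceDetect = TimeSpan.Zero;

protected override void Update(GameTime gameTime)
{
    _timeSinceDetect += gameTime.ElapsedGameTime;
    if (_timeSinceDetect >= TimeSpan.FromMilliseconds(100))
    {
        _timeSinceDetect = TimeSpan.Zero;
        Detect();
    }

    base.Update(gameTime);
}
```

The DispatcherTimer approach keeps the detection off the game loop's hot path, which is why I went with it here.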

When you run the game now and point the camera to the marker, nothing happens. That’s perfectly normal. The marker is getting detected but we’re not doing anything with the detection results yet. Let’s add a 3D model to our game and position it on the marker.

Adding the model

For the 3D model I chose a model of the Tardis I found online (if you don't know what the Tardis is, go out and buy all the Doctor Who DVD boxes you can find and lock yourself in your room for a few months. Thank me afterwards).

To use this in Monogame you'll need to push it through either the XNA or the Monogame content pipeline to convert it into an XNB file. I'm not going to detail how to do this here; there's lots of info out there. If you want a quick start, grab the XNB file from my demo project.

Add a folder called Content to your solution, add the XNB file in there and set its build action to Content. Next, we’ll once again add some fields to the Game1 class.

Code Snippet
private Vector3 _modelPosition;
private Vector3 _cameraPosition;
private Model _tardis;
private float _aspectRatio;

The names speak for themselves: we've got two vectors, one for the position of the Tardis and one for the position of the camera, plus the Tardis model itself and a field that holds the aspect ratio.

In the Initialize method, right before the call to InitializeAR add these lines.

Code Snippet
_aspectRatio = _graphics.GraphicsDevice.Viewport.AspectRatio;
_modelPosition = Vector3.Zero;
_cameraPosition = new Vector3(0, 0, 50);

InitializeAR();

These are just basic vectors that we’ll use to calculate the actual position on screen where we need to render our model.

Next, we’ll need to load the model into memory. This is done in the overridden LoadContent method in the Game1 class.

Code Snippet
protected override void LoadContent()
{
    _tardis = Content.Load<Model>("tardis");

    base.LoadContent();
}

There’s no need to specify that the model lives in the Content folder as Monogame assumes that the project contains a folder called Content and that’s where it looks for XNB files.

Before we go into the draw logic of the model, there’s one problem that we’ll need to tackle. SLAR is using Matrix3D classes while Monogame has its own Matrix class. We’ll need a way to convert Matrix3D to Matrix. Here’s an extension method that does just that.

Code Snippet
public static class MatrixConverter
{
    /// <summary>
    /// Convert a Silverlight matrix into an Xna matrix
    /// </summary>
    /// <param name="matrix"></param>
    /// <returns></returns>
    public static Matrix ToXnaMatrix(this System.Windows.Media.Media3D.Matrix3D matrix)
    {
        var m = new Matrix(
           (float)matrix.M11, (float)matrix.M12, (float)matrix.M13, (float)matrix.M14,
           (float)matrix.M21, (float)matrix.M22, (float)matrix.M23, (float)matrix.M24,
           (float)matrix.M31, (float)matrix.M32, (float)matrix.M33, (float)matrix.M34,
           (float)matrix.OffsetX, (float)matrix.OffsetY, (float)matrix.OffsetZ, (float)matrix.M44);

        return m;
    }
}

Now, onto the Draw method. The position of the following code is really important: Monogame draws in the order you feed it instructions, meaning we first need to draw the camera feed and then the Tardis model. That way the model is nicely overlaid on the camera image.

In the Draw method, after Spritebatch.End and before base.Draw add these lines

Code Snippet
if (_markerResult != null)
{
    //a marker is detected, draw the Tardis model
    var result = _markerResult;
    _graphics.GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    _graphics.GraphicsDevice.BlendState = BlendState.Opaque;
    _graphics.GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;

    // Copy any parent transforms.
    Matrix[] transforms = new Matrix[_tardis.Bones.Count];
    _tardis.CopyAbsoluteBoneTransformsTo(transforms);

    // Draw the model. A model can have multiple meshes, so loop.
    foreach (ModelMesh mesh in _tardis.Meshes)
    {
        // This is where the mesh orientation is set, as well as our camera and projection.
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.World = Matrix.CreateScale(0.1f) *
                           (transforms[mesh.ParentBone.Index] * mesh.ParentBone.Transform *
                            Matrix.CreateTranslation(_modelPosition) *
                            result.Transformation.ToXnaMatrix());

            effect.View = Matrix.CreateLookAt(_cameraPosition, Vector3.Zero, Vector3.Up);
            effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f),
                _aspectRatio, 1.0f, 10000f);
        }

        // Draw the mesh, using the effects set above.
        mesh.Draw();
    }
}

Since we're using a 3D model we can't use the spritebatch to draw it. We set some properties on the graphics device first. Then we grab all transforms that are included in the model. The Tardis model I'm using is very simple, just a box, so there are no transformations there.

We loop through all the meshes in the model; for each mesh we loop through its effects and that's where we set the position. We use the detection result's transformation matrix to calculate the world matrix for each effect, and we draw each mesh.

Conclusion

The end result might not seem like much, but that's because my 3D Monogame skills are very lacking. Still, just consider what we've done here: we've added a camera stream into a game, used that same stream to detect a certain pattern and positioned a game element onto that pattern. From here I'll leave the rest to your imagination.

The project can be downloaded from my Skydrive.

This is an imported post. It was imported from my old blog using an automated tool and may contain formatting errors and/or broken images.
