
React Native 3D Capabilities

Several years ago, React Native was an interesting and promising technology. It enabled you to write JavaScript code once and run it on both iOS and Android devices like a charm! And it really does work like a charm when the task is fairly routine, like building an online store, delivery service, or news application. Just look at the list of major users of this technology:

  • Facebook
  • Walmart
  • Bloomberg
  • Instagram
  • Soundcloud Pulse
  • Tesla
  • Wix
  • Pinterest
  • UberEATS
  • Vogue
  • etc.

But… is it so great for tasks that are not very common? Like for instance, rendering 3D models with custom textures?

Let’s start by searching for an appropriate library to implement the core functionality: model rendering. The first thing that comes to mind is three.js. It’s very popular in web development and gives us tons of features.

For mobile devices, the library works through the `expo-three` wrapper, which assumes that you are using Expo for React Native development. So let’s look at the most popular ways to set up a React Native application.

On Facebook’s GitHub for React Native we can find a Getting Started guide with advice to jump into development by installing the ‘expo-cli’ utility. It sets up a very user-friendly experience because you don’t have to install additional dependent software and can run the app in just a few minutes on your iPhone or Android device. Additional installation of the Expo application on your device is required though.

While it may seem convenient at first, Expo restricts your creativity: not all libraries work well with it, and a glance at the number of open issues on the Expo and expo-cli GitHub pages may scare you off. In my own experience (which admittedly was limited to Android), various random bugs made the development process nothing like what I am used to with more mature technologies. After some decent research I decided to skip this approach: libraries for Expo are far from perfect, and we cannot change the native code. So let’s dive into other options.

Facebook advises “Building Projects with Native Code” as a second option. It’s a lot more agile and still very convenient. You can debug both JavaScript and native code using Xcode or Android Studio. This approach requires you to install ‘react-native-cli’ and allows you to jump really quickly into development.

While choosing a 3D rendering library, I looked into ‘react-native-gl-model-view’. It uses a native bridge to GLView for iOS and a native bridge to jPCT-AE for Android. The demo on GitHub’s page looked very close to the result I wanted to achieve in my application so I was quite enthusiastic.

The library supports lots of 3D model and texture formats, so you can easily find an open-source cup model to use. The task doesn’t seem so hard now, does it? Google an appropriate cup model, download it, drop it into our application, and voilà!

Now it’s time to make a texture for our cup. To begin, I decided to use a plain white BMP image just to test the application.

Trying to render the cup with a solid white texture. The model has 9664 polygons and 4832 vertices.

As you can see, the first attempt to render the cup looks weird. Parts of it are incorrect, and moving the model frequently causes flickering. Still, it’s a good start.

Rendering a 3D model is pretty simple, but it requires some experimenting with settings to display the model the way you want.

return (
  <GLModelView
    model={{ uri: model }}
    texture={{ uri: texture }}
    animate
    onStartShouldSetResponder={() => true}
    // rotation and scale props omitted for brevity
  />
);

Variables ‘model’ and ‘texture’ are just the names of files. They are stored in assets when the application is compiled. That is not exactly what we need, because textures should be replaceable, so a user can pick any picture from the gallery and place it on a cup. Unfortunately, ‘react-native-gl-model-view’ cannot do that, so we have to modify its native code.
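On the JavaScript side, the switch between the bundled texture and a user-picked file can be a small helper. This is a sketch: the helper name and the default file name are my own assumptions, and `texturePath` refers to the extra native prop described next.

```javascript
// Choose between the bundled default texture and a user-picked file.
// (Hypothetical helper; 'cup_texture.bmp' is a placeholder asset name.)
function textureProps(userImagePath) {
  if (userImagePath) {
    // image pickers usually return a file:// URI; the native side wants a plain path
    return { texturePath: userImagePath.replace(/^file:\/\//, '') };
  }
  return { texture: 'cup_texture.bmp' }; // bundled default from assets
}
```

The returned object can then be spread onto the model-view component, so only one of the two props is ever set.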

React components use a native bridge to send prop data. In the native code, the class extending ViewManager or SimpleViewManager handles the React props, and we should add one more prop to handle a texture loaded from a file.

I found the existing ReactProp and added a new one next to it (method bodies sketched here):

@ReactProp(name = "texture")
public void setModelTexture(RNGLModelView view, @Nullable String textureFileName) {
  view.setTexture(loadTexture(textureFileName)); // existing behavior: texture from assets
}

@ReactProp(name = "texturePath")
public void setModelTextureFromPath(RNGLModelView view, @Nullable String texturePath) {
  view.setTexture(loadTextureFromPath(texturePath)); // new: texture from an arbitrary file
}

And a function to load the texture from a path:

private Texture loadTextureFromPath(String textureFilePath) {
  Texture texture = null;

  try {
    File file = new File(textureFilePath);
    if (file.exists()) {
      FileInputStream fileInputStream = new FileInputStream(file);
      texture = new Texture(fileInputStream, true);
      fileInputStream.close();
    }
  } catch (IOException e) {
    e.printStackTrace();
  }
  return texture;
}

Depending on which prop comes from the JavaScript code, texture or texturePath, we set the renderer’s texture from the appropriate source.

The library also has two issues. They can quickly be fixed using the suggestions on its GitHub page. On Android devices, models flicker on the first frame of animation and sometimes after the model is moved. This can be fixed with the following code amendment:

public void onDrawFrame(GL10 gl) {
  // Removes flickering on Android devices:
  // render every frame instead of only while animating
  // if (mAnimate) {
  renderFrame();
  // }
}

Even though this fix works like a charm, the library has not been updated since May 28, 2018, and this could be a problem.

The second issue: the image renderer uses the RGB color model, not RGBA, so black is treated as transparent. Fixing this is easy: the jPCT Texture constructor takes an optional boolean that determines whether the alpha channel should be used. In the loadTexture method we need to change one line:

texture = new Texture(textureStream, true);

Now the library is usable and shouldn’t give us any more trouble.

However, running the application still gives us a very weird-looking cup with incorrectly filled texture regions. So I decided to check my model in Blender for artifacts and found out that it has too many polygons. This makes the object file too large and causes rendering glitches in the areas with the highest polygon concentration. After smoothing, the object file size decreased from ~600KB to ~300KB. Now it looks good and, most importantly, it renders correctly in the application!

Cup model before optimization.
Cup after polygon optimization, seamed, with a created UV mesh.

Now we can save the created UV mesh as a BMP image for further use as a texture foundation. Loading the newly created texture and model shows that the texture is applied correctly; it has no artifacts, and animations are smooth.

Rendering the optimized model gave a good result: animations are smooth, with no more flickering! The new model has 3864 polygons and 1932 vertices.

Our next task is loading a picture from the gallery, and that is pretty much trivial. The easiest option is the ‘react-native-image-picker’ library. It allows us to load images from the device gallery or even work with a picture taken by the camera application. Simply following the instructions in the documentation gets us through adding a button that loads an image from the gallery or camera, with one little exception: even if you add a ‘load from gallery’ button without the option of opening the camera app, you still need to add the camera permission to the Android manifest. This seems counterintuitive, since newer versions of Android ask for certain permissions at the time of first use and do not require any additional adjustments to the manifest.
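The picker hands its result to a callback as a response object; a minimal sketch of handling it is below. The field names (didCancel, error, uri) follow the library’s classic callback shape, and the helper itself is hypothetical.

```javascript
// Extract a usable image URI from a react-native-image-picker response.
// Returns null when the user cancelled or the picker reported an error.
function extractPickedImage(response) {
  if (!response || response.didCancel || response.error) {
    return null;
  }
  return response.uri || null;
}
```

Keeping this logic in one place makes it easy to feed the result straight into the texture-merging step that follows.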

After loading a new picture, we should place it correctly on the blank texture image and give the result to the renderer. Searching the web for an appropriate library was difficult. The most common solution to this problem is to manipulate images on a JavaScript canvas, but that method is slow and resource-expensive and requires creating and rendering the image on screen. Sure, we can hide it from the user, but it looks like a dirty workaround. After hours of searching for a better solution I found the ‘react-native-image-marker’ library. It can mark an image with another image or text, has settings for the marker image’s coordinates and scale, and doesn’t require creating any image on a View in the first place. It was very helpful!
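Placing the user’s picture on the blank texture comes down to a bit of arithmetic. Here is a sketch of the placement calculation; all dimensions are my own example values, not anything prescribed by the library.

```javascript
// Compute top-left coordinates that center a scaled user image on the
// blank base texture, e.g. to feed X/Y options to an image-marker call.
function markerPosition(baseW, baseH, imgW, imgH, scale) {
  const w = imgW * scale;  // marker width after scaling
  const h = imgH * scale;  // marker height after scaling
  return { X: Math.round((baseW - w) / 2), Y: Math.round((baseH - h) / 2) };
}
```

For a 1024×1024 base texture and a 512×512 picture at scale 1, this centers the image at (256, 256).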

The documentation on GitHub is very poor, and it’s unclear what kind of resources the library uses; unfortunately, that is a common problem with libraries that are not mainstream. I spent plenty of time trying to figure out how to merge two images and was really close to dismissing the idea of using ‘react-native-image-marker’ entirely: it can only use a local file as the base picture, while any other picture, even one obtained from React Native JavaScript code, can only serve as the mark. This creates a lot of additional work and workarounds, like shuffling files around with a file system library before doing the real job of marking. However, you can use ‘rn-fetch-blob’ or ‘react-native-fs’ to deal with these issues.
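The file-system workaround boils down to copying the blank texture into a readable directory and keeping track of paths. The actual copy would be done with rn-fetch-blob or react-native-fs; this sketch shows only the path bookkeeping, and the directory and file names are assumptions.

```javascript
// Build a stable destination path for the base texture inside the app's
// cache directory, so repeated merges overwrite the same file instead of
// accumulating copies.
function baseTexturePath(cacheDir, fileName) {
  // tolerate cache dirs reported with or without a trailing slash
  return cacheDir.replace(/\/+$/, '') + '/' + fileName;
}
```

That resulting path is what gets passed both to the marker call as the base image and, after merging, to the native texturePath prop.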

Finally, after days of struggling with the ways all these libraries interact and transfer data between each other, I got it working. That was the biggest problem after all: the key libraries have no unification and very weak documentation, which made development painfully difficult.

3D cup with a custom image merged onto the predefined white texture.