The motivation
Ah, the nostalgia of Microsoft Encarta. You know, that encyclopedia was ahead of its time, especially with its interactive features.
In Argentina, where I grew up, every kid of the late nineties had a pirated CD copy of it in their room, and most homework assignments were made with it. Wikipedia came later with more quantity than quality; it obviously required an internet connection, which wasn't widely available until the mid-2000s, and it never incorporated much in terms of functionality. Encarta, however, was shut down after being significantly eroded by free (as in beer) information.
Encarta was impressive because of the way it used an incipient, somewhat internal technology called DirectDraw, which would become part of DirectX and which also powered some of the best-selling games of the era, like Command & Conquer, Warcraft and Theme Hospital.
DirectDraw was available to all developers as a DLL (ddraw.dll), but source-code-level access was restricted to Microsoft employees and partners under an NDA. Encarta, as a Microsoft product, thus had internal access to DirectDraw's implementation and debugging tools, giving its developers an edge in terms of innovation.
One of the cool things they came up with was a 360-degree panorama viewer that let you explore tourist destinations as if you were there. It was mind-blowing back then, and it's still pretty impressive today. I mean, we live in an era of thin wrappers over OpenAI, after all.
That's what motivated me to recreate it. I would like to rebuild most of the cool features Encarta had, to bring back that mentality of coolness and innovation.
The project
The project that concerns us now, that is, the Panorama Viewer, comes down to this: we need to create a sphere and map a panoramic image onto it using a spherical projection.
So, let me walk you through the steps it requires, first from a bird's-eye perspective:
- First, we set up a Next.js project with TypeScript, as we will use them exclusively throughout this blog.
- We add React Three Fiber with Drei for the 3D rendering, as Drei makes R3F a bit more approachable.
- The core of the project will be a PanoramaViewer. It needs to handle the loading of the panoramic image and set up a 3D scene with a spherical projection.
- Optional: As most of the panoramic images out there are stored in TIFF format, we need to handle at least two image formats. Encarta probably used some proprietary format, but we want to support JPEG and TIFF. For TIFF support, we need to create a server-side API route that uses the sharp library to convert TIFF images to JPEG on the fly (see the sketch after this list).
- Next comes the interaction part. We want to mimic the smooth panning and zooming from Encarta. So, we need to create a custom hook (usePanoramaControls) that will handle user interactions.
- To make it responsive, a problem Encarta didn't have, we need to make sure the viewer adapts to its container.
- Optional: Error handling and loading states are not optional per se, but let’s forego them for now.
- Finally, we wrap everything so the consumer just needs to pass it an image URI.
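The exact route isn't shown in this post, so what follows is only a rough sketch of that optional TIFF step, assuming the App Router convention and a hypothetical /api/convert-tiff endpoint that fetches a TIFF and streams it back as JPEG via sharp:

// app/api/convert-tiff/route.ts (hypothetical route, for illustration only)
import { NextRequest, NextResponse } from 'next/server';
import sharp from 'sharp';

export async function GET(request: NextRequest) {
  const url = request.nextUrl.searchParams.get('url');
  if (!url) {
    return NextResponse.json({ error: 'Missing "url" query parameter' }, { status: 400 });
  }

  // Fetch the source TIFF and convert it to JPEG in memory.
  const source = await fetch(url);
  if (!source.ok) {
    return NextResponse.json({ error: 'Could not fetch the source image' }, { status: 502 });
  }
  const tiffBuffer = Buffer.from(await source.arrayBuffer());
  const jpegBuffer = await sharp(tiffBuffer).jpeg({ quality: 90 }).toBuffer();

  return new NextResponse(new Uint8Array(jpegBuffer), {
    headers: { 'Content-Type': 'image/jpeg' },
  });
}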
As most of the procedure would be virtually the same for every blog post here, we will skip the scaffolding details and focus on the few pertinent components.
The Viewer
We start with a basic React component that will serve as the container for our 3D scene:
const PanoramaViewer: React.FC<PanoramaViewerProps> = ({ imageUrl }) => {
return (
<div style={{ width: '100%', height: '100vh' }}>
{/* the 3D scene will go here */}
</div>
);
};
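The PanoramaViewerProps type isn't spelled out here; a minimal shape consistent with how the component is used later (an image URL plus the field-of-view options) would be:

// Assumed prop shape, inferred from how PanoramaViewer is used below.
interface PanoramaViewerProps {
  imageUrl: string;     // equirectangular panorama, JPEG or TIFF
  initialFov?: number;  // starting field of view, defaults to 75
  minFov?: number;      // how far the user can zoom in, defaults to 30
  maxFov?: number;      // how far the user can zoom out, defaults to 90
}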
Next, we integrate Three.js by using R3F and Drei:
import { Canvas } from '@react-three/fiber';
import { Sphere } from '@react-three/drei';
const PanoramaViewer: React.FC<PanoramaViewerProps> = ({ imageUrl }) => {
return (
<div style={{ width: '100%', height: '100vh' }}>
<Canvas>
<Sphere args={[500, 60, 40]} scale={[-1, 1, 1]}>
{/* the material will go here */}
</Sphere>
</Canvas>
</div>
);
};
Then, we create a separate component to handle the spherical projection:
import * as THREE from 'three';
import { Sphere, useTexture } from '@react-three/drei';

const Panorama: React.FC<{ imageUrl: string }> = ({ imageUrl }) => {
  // Load the equirectangular image as a texture and map it onto the inside of the sphere.
  const texture = useTexture(imageUrl);
  return (
    <Sphere args={[500, 60, 40]} scale={[-1, 1, 1]}>
      <meshBasicMaterial map={texture} side={THREE.BackSide} />
    </Sphere>
  );
};
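One caveat: useTexture suspends while the image downloads, so when Panorama goes into the Canvas you will likely want a Suspense boundary around it, along these lines:

<Canvas>
  <React.Suspense fallback={null}>
    <Panorama imageUrl={imageUrl} />
  </React.Suspense>
</Canvas>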
And to allow for user interaction, we implement a CameraController:
import { useThree, useFrame } from '@react-three/fiber';

const CameraController: React.FC<{
  initialFov: number;
  minFov: number;
  maxFov: number;
}> = ({ initialFov, minFov, maxFov }) => {
  const { camera, gl } = useThree();
  const controls = usePanoramaControls(camera as THREE.PerspectiveCamera, gl.domElement, {
    initialFov,
    minFov,
    maxFov,
  });
  // Apply the accumulated pointer and wheel input once per rendered frame.
  useFrame(() => {
    controls.update();
  });
  return null;
};
The final PanoramaViewer will bring those elements together:
const PanoramaViewer: React.FC<PanoramaViewerProps> = ({
imageUrl,
initialFov = 75,
minFov = 30,
maxFov = 90,
}) => {
// ... state and effects go here...
return (
<div ref={containerRef} style={{ width: '100%', height: '100vh' }}>
<Canvas camera={{ fov: initialFov, near: 0.1, far: 1000 }}>
<CameraController initialFov={initialFov} minFov={minFov} maxFov={maxFov} />
<Panorama imageUrl={processedImageUrl} />
</Canvas>
</div>
);
};
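The elided state and effects mostly deal with resolving the final image URL; a sketch of what they could look like, assuming the hypothetical /api/convert-tiff route from earlier:

// Sketch only: route TIFF images through the conversion endpoint, pass JPEGs straight through.
const containerRef = useRef<HTMLDivElement>(null);
const [processedImageUrl, setProcessedImageUrl] = useState(imageUrl);

useEffect(() => {
  const isTiff = /\.tiff?$/i.test(imageUrl);
  setProcessedImageUrl(
    isTiff ? `/api/convert-tiff?url=${encodeURIComponent(imageUrl)}` : imageUrl
  );
}, [imageUrl]);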
The Interactions
Now, let’s break down the usePanoramaControls hook:
const usePanoramaControls = (
camera: THREE.PerspectiveCamera,
domElement: HTMLElement,
options: PanoramaControlsOptions
) => {
// logic will go here
};
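The PanoramaControlsOptions type isn't shown either; a plausible shape, matching what CameraController passes in, would be:

// Assumed options shape for the hook.
interface PanoramaControlsOptions {
  initialFov: number;
  minFov: number;
  maxFov: number;
}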
We use refs to manage the state of interactions and the position of the camera. This avoids the unnecessary re-renders that would occur if we used useState for frequently changing values:
const isUserInteracting = useRef(false);
const onPointerDownMouseX = useRef(0);
const onPointerDownMouseY = useRef(0);
const onPointerDownLon = useRef(0);
const onPointerDownLat = useRef(0);
const lon = useRef(0);
const lat = useRef(0);
const phi = useRef(0);
const theta = useRef(0);
const distance = useRef(50); // lookAt target distance; only the direction matters
We add event handlers for pointer, wheel, and touch events. A few notes:
- The 0.1 multiplier acts as a sensitivity adjustment.
- The Y-axis is inverted because screen Y coordinates increase downwards, while latitude increases upwards.
- The zoom handler maps the wheel delta linearly onto a change in FOV.
useEffect(() => {
const onPointerDown = (event: PointerEvent) => {
isUserInteracting.current = true;
onPointerDownMouseX.current = event.clientX;
onPointerDownMouseY.current = event.clientY;
onPointerDownLon.current = lon.current;
onPointerDownLat.current = lat.current;
};
const onPointerMove = (event: PointerEvent) => {
if (isUserInteracting.current) {
lon.current = (onPointerDownMouseX.current - event.clientX) * 0.1 + onPointerDownLon.current;
lat.current = (event.clientY - onPointerDownMouseY.current) * 0.1 + onPointerDownLat.current;
}
};
const onPointerUp = () => {
isUserInteracting.current = false
};
const onWheel = (event: WheelEvent) => {
const fov = camera.fov + event.deltaY * 0.05
camera.fov = THREE.MathUtils.clamp(fov, minFov, maxFov)
camera.updateProjectionMatrix()
};
// ... event listeners ...
}, [camera, domElement, minFov, maxFov]);
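The elided part simply registers those handlers on the canvas element and removes them on cleanup; a minimal sketch of that wiring, assuming pointer and wheel listeners on the domElement passed to the hook:

// Inside the same useEffect: attach on mount, detach on unmount.
domElement.addEventListener('pointerdown', onPointerDown);
domElement.addEventListener('pointermove', onPointerMove);
domElement.addEventListener('pointerup', onPointerUp);
domElement.addEventListener('wheel', onWheel, { passive: true });

return () => {
  domElement.removeEventListener('pointerdown', onPointerDown);
  domElement.removeEventListener('pointermove', onPointerMove);
  domElement.removeEventListener('pointerup', onPointerUp);
  domElement.removeEventListener('wheel', onWheel);
};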
Finally, we create an update function to adjust the camera based on those interactions:
const update = () => {
  // Clamp latitude so the camera can't flip over the poles.
  lat.current = Math.max(-85, Math.min(85, lat.current));
  // Convert lon/lat (degrees) to spherical angles.
  phi.current = THREE.MathUtils.degToRad(90 - lat.current);
  theta.current = THREE.MathUtils.degToRad(lon.current);
  // Point the camera at the corresponding spot on the sphere.
  const x = distance.current * Math.sin(phi.current) * Math.cos(theta.current);
  const y = distance.current * Math.cos(phi.current);
  const z = distance.current * Math.sin(phi.current) * Math.sin(theta.current);
  camera.lookAt(x, y, z);
};
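The hook then exposes update so CameraController can call it from its useFrame loop, presumably by returning it at the end:

// Returned so CameraController can drive the camera once per frame.
return { update };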
Conclusion
The result is a web-based 360-degree panorama viewer that captures the spirit of that old Encarta feature. It's not something that will knock your socks off, but it's a good example of the little details that made all the difference back then.
Some people might think it's pretty cool to consider how far we've come, but I believe the opposite is true.
The code
Repository: github.com/feremabraz/encarta-panorama