> This article is about analyzing Figma files using kiwi, a library written by Figma’s former CTO, Evan Wallace.
> Figma does not officially support or document this library.
> > But it was very helpful in understanding the structure of Figma files.
>
> The various functions and logic that are covered were analyzed and reconstructed from the source at https://madebyevan.com/figma/fig-file-parser/.
Are .fig files a mystery?
Have you ever opened a .fig file? Most likely, the answer is no.
Unlike traditional design tools, Figma has adopted a cloud-based, simultaneous editing system that has completely changed the concept of “saving a design file” as we’re used to it.
As a result, .fig files are unlike anything we’ve known before. Since the original data no longer lives in files we save ourselves, how do we manage it?
Many workflows still require backup files for archival purposes, but a .fig export can’t really serve as a backup.
And there’s another problem: we can’t open the file directly, because the .fig format is not publicly documented. And if we import a .fig back into Figma, it becomes a brand-new Figma file, so the data is no longer the same.
So why can’t we open this file directly, and what information is in it?
File Structure in Figma
Figma files are stored in binary form, so we can’t open them directly. However, understanding the structure of Figma files can help us understand Figma better.
Let’s analyze the file one by one in the order below.
- Analyzing the file header
- Extracting the file
- Decompressing the file
Analyzing the file header
Figma files are stored in a compressed binary format.
After loading the file, you can check the first two bytes of the file to see the format as shown below.
Verify that the header is PK.
const zipHeader = String.fromCharCode(...arrayBuffer.slice(0, 2));
let fileEntries = [];
if (zipHeader === 'PK') {
const zipReader = new zip.ZipReader(new zip.Uint8ArrayReader(new Uint8Array(arrayBuffer)));
const entries = await zipReader.getEntries();
fileEntries = entries.filter((entry) => !entry.directory);
} else {
fileEntries = [{ filename: file.name, getData: async () => new Uint8Array(arrayBuffer) }];
}
PK means that it is compressed into a ZIP file. Use the @zip.js/zip.js library to decompress it.
Most entries are stored in the zip without compression, so they simply extract as-is (note that unzip prints “extracting” for stored entries and “inflating” for compressed ones):
Archive: scripts/sample-design.fig
extracting: canvas.fig
extracting: thumbnail.png
inflating: meta.json
creating: images/
extracting: images/072244895a84d42d6830153fdd427054c3b92e5b
extracting: images/ad81c993cfd7c4df28dc7c20635334de68fb5249
extracting: images/5fa7cc56ad2ffaeb1e88356d208bec30e6f7ecf5
extracting: images/e7b5d4319b7f7037dc01f10637a389730aee63c5
extracting: images/3bc7a75365abd25dbd57e724280b40f26f1483fb
Extract files
We now have an entry for each file inside the zip; next, we extract the data from each entry.
There are three types of entries inside a Figma file.
- Design Data
- Meta Data
- Graphics Data
Design Data
You can think of it as saving a typical design file.
Image files such as PNG, JPEG, and GIF are also included in the design data.
Image files are also stored in binary form, so we check the signature at the beginning to see if it is an image file or not.
const isPNG = String.fromCharCode(...bytes.slice(0, 8)) === String.fromCharCode(137, 80, 78, 71, 13, 10, 26, 10);
const isJPEG = String.fromCharCode(...bytes.slice(0, 2)) === String.fromCharCode(255, 216);
const isGIF = ['GIF87a', 'GIF89a'].includes(String.fromCharCode(...bytes.slice(0, 6)));
if (isPNG || isJPEG || isGIF) {
const imageType = isPNG ? 'PNG' : isJPEG ? 'JPEG' : 'GIF';
// Add image preview logic
return;
}
Meta Data
The .json file contains metadata.
if (filename.endsWith('.json')) {
const decoder = new TextDecoder('utf-8');
const jsonString = decoder.decode(bytes);
const json = JSON.parse(jsonString);
console.log('JSON file content:', json);
return;
}
It looks something like this:
{
"client_meta": {
"background_color": {
"r": 0.9607843160629272,
"g": 0.9607843160629272,
"b": 0.9607843160629272,
"a": 1
},
"thumbnail_size": { "width": 400, "height": 154 },
"render_coordinates": { "x": -443, "y": -195, "width": 3161, "height": 1216 }
},
"file_name": "sample-design",
"developer_related_links": []
}
Curiously, there is an entry called client_meta.
Graphical Data (Figma Data Structure)
We check the header of the Figma data to determine the file type and process it accordingly. Figma files have a unique 8-byte signature that identifies the file type.
const figHeader = String.fromCharCode(...bytes.slice(0, 8));
if (figHeader === 'fig-kiwi' || figHeader === 'fig-jam.' || figHeader === 'fig-deck') {
const fileType = figHeader === 'fig-jam.' ? 'FigJam' : figHeader === 'fig-deck' ? 'FigDeck' : 'Figma';
console.log(`${fileType} file exists`);
return await decodeFile(bytes);
}
Determine the file type by checking the first 8 bytes of the file.
- fig-kiwi: Figma
- fig-jam.: FigJam board
- fig-deck: Slide Deck
I think there are about 3 types of files right now (because there are 3 types of services), but there could be more in the future.
The first file header is unusual: it says kiwi.
kiwi (https://github.com/evanw/kiwi) is a library by Evan Wallace, who was Figma’s CTO until 2021. He also wrote esbuild.
kiwi was inspired by Google’s Protocol Buffer. It was created to represent and manage data types in binary form.
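For reference, kiwi’s schema language reads much like Protocol Buffers: fields are declared with a type, a name, and a numeric ID. The fragment below is hypothetical (it is not Figma’s actual schema) and only illustrates the syntax:

```
enum NodeType {
  DOCUMENT = 0;
  CANVAS = 1;
  FRAME = 2;
}

message NodeChange {
  GUID guid = 1;
  NodeType type = 2;
  string name = 3;
}
```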
So Figma internally uses kiwi to store files.
But none of this is officially documented, so it could change at any time.
Decoding files
Reading the first 8 bytes was already the first step of decoding the file. Let’s continue to see what comes next.
- File Header
const bytes = arrayBuffer;
const header = String.fromCharCode(...bytes.slice(0, 8));
console.log('header:', header);
Suppose fig-kiwi came out.
- Extract version information
The header can be read as string data, but the rest of the binary is just a Uint8Array, so we use a DataView to read values with the correct types.
// Use byteOffset/byteLength in case the input is a view into a larger buffer (e.g. a Node Buffer).
const view = new DataView(arrayBuffer.buffer, arrayBuffer.byteOffset, arrayBuffer.byteLength);
const version = view.getUint32(8, true);
console.log('version:', version);
We get the version by reading a Uint32 (4 bytes) starting at index 8.
The last file I saved reports version 70. What the number corresponds to isn’t documented, but the format has clearly gone through many revisions.
- Extracting graphic data
Let’s read the data after version.
Note that in the Figma file it is stored as little endian.
// Use byteOffset/byteLength so the view lines up even when the bytes are a view into a larger buffer.
const view = new DataView(arrayBuffer.buffer, arrayBuffer.byteOffset, arrayBuffer.byteLength);
const chunkLength = view.getUint32(12, true);
So we pass true (the littleEndian flag) to every DataView read.
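A quick illustration of what the flag does (plain DataView, no Figma data involved):

```javascript
// The byte 0x46 (= 70) stored first means little-endian order.
const buf = new Uint8Array([0x46, 0x00, 0x00, 0x00]);
const view = new DataView(buf.buffer);
console.log(view.getUint32(0, true));  // 70 — little endian, correct for .fig
console.log(view.getUint32(0, false)); // 1174405120 — big endian, wrong here
```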
Here’s the rough structure
[header:8][version:4][schema_chunk_length:4][schema_chunk:schema_chunk_length][data_chunk_length:4][data_chunk:data_chunk_length]
Given this layout, we need to read a total of 4 pieces of data after the header and version:
[schema_chunk_length:4][schema_chunk:schema_chunk_length][data_chunk_length:4][data_chunk:data_chunk_length]
Data chunk extraction
Let’s find the chunks.
const chunks = [];
let offset = 12;
while (offset < bytes.length) {
const chunkLength = view.getUint32(offset, true);
offset += 4;
chunks.push(bytes.slice(offset, offset + chunkLength));
offset += chunkLength;
}
if (chunks.length < 2) throw new Error('Not enough chunks');
- The offset starts from 12, which is 8 (header) + 4 (version).
- First, read the length of each chunk, then slice the data for that length and add it to the chunks array.
- Store the extracted chunks in an array.
In this way, we can obtain a total of two chunks.
- chunks[0] : schema chunk
- chunks[1] : data chunk
The schema chunk stores the kiwi schema that represents Figma’s data structure. Similarly, the data chunk contains the kiwi data.
These two are combined to create the actual data.
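To sanity-check the chunk walk, here is a self-contained sketch that writes a tiny synthetic container in the layout above (placeholder bytes, not real kiwi data) and extracts the two chunks back out:

```javascript
// Build a synthetic container: [header:8][version:4][len:4][chunk][len:4][chunk]
const header = 'fig-kiwi';
const version = 70;
const schemaChunk = new Uint8Array([1, 2, 3]);  // placeholder bytes
const dataChunk = new Uint8Array([4, 5, 6, 7]); // placeholder bytes

const out = new Uint8Array(8 + 4 + 4 + schemaChunk.length + 4 + dataChunk.length);
const view = new DataView(out.buffer);
for (let i = 0; i < 8; i++) out[i] = header.charCodeAt(i);
view.setUint32(8, version, true); // little endian throughout
view.setUint32(12, schemaChunk.length, true);
out.set(schemaChunk, 16);
view.setUint32(16 + schemaChunk.length, dataChunk.length, true);
out.set(dataChunk, 20 + schemaChunk.length);

// Walk it back with the same loop used for real files.
let offset = 12;
const chunks = [];
while (offset < out.length) {
  const len = view.getUint32(offset, true);
  offset += 4;
  chunks.push(out.slice(offset, offset + len));
  offset += len;
}
console.log(chunks.length); // 2
```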
Decompression
The chunks we have obtained so far are all in a compressed state. Therefore, we need to decompress them.
const uncompressChunk = (chunkBytes) => {
try {
return pako.inflateRaw(chunkBytes);
} catch (err) {
try {
// zstd decompression (decoder is an initialized zstddec ZSTDDecoder; see the full source at the end)
return decoder.decode(chunkBytes);
} catch {
throw err;
}
}
};
const encodedSchema = uncompressChunk(chunks[0]);
const encodedData = uncompressChunk(chunks[1]);
We create a simple function to handle this process. Looking at the code, it attempts two decompression methods:
- pako.inflateRaw: DEFLATE decompression
- decoder.decode: ZSTD decompression
DEFLATE is a widely used standard compression algorithm. ZStandard (ZSTD), created by Facebook, offers fast compression speed and a high compression ratio: https://github.com/facebook/zstd
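The DEFLATE path can be sanity-checked with Node’s built-in zlib module, which exposes the same raw-DEFLATE operation that pako.inflateRaw performs (the payload string here is just an example):

```javascript
import zlib from 'node:zlib';

// Round-trip a payload through raw DEFLATE (no zlib header),
// the same framing used by the compressed chunks inside a .fig file.
const original = Buffer.from('fig chunk payload');
const compressed = zlib.deflateRawSync(original);
const restored = zlib.inflateRawSync(compressed);
console.log(restored.toString()); // 'fig chunk payload'
```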
Schema and Data Decoding
We decode the schema and data obtained earlier.
const schema = kiwi.compileSchema(kiwi.decodeBinarySchema(encodedSchema));
const { nodeChanges, blobs } = schema.decodeMessage(encodedData);
console.log('nodeChanges', nodeChanges[0]);
console.log('schema', schema);
When the final result is decoded, the following data is produced:
- type
- nodeChanges ← change tree
- blobs ← data such as command, vectorNetwork, etc.
- sessionID
- ackID
- pasteID
- pasteFileKey
- pasteIsPartiallyOutsideEnclosingFrame
- pastePageId
- isCut
- pasteEditorType
- publishedAssetGuids
- clipboardSelectionRegions
As you can see up to this point, this is not a typical document structure. We need to transform it into a document structure that can be viewed through nodeChanges.
Figma’s document structure is basically a tree, but nodeChanges is an array. A process is needed to convert this array into a tree.
nodeChanges Converter
const nodes = new Map();
const orderByPosition = ({ parentIndex: { position: a } }, { parentIndex: { position: b } }) => {
return (a < b) - (a > b);
};
for (const node of nodeChanges) {
const { sessionID, localID } = node.guid;
// The unique ID of a Figma node consists of sessionID and localID.
// We can find nodes using this ID.
nodes.set(`${sessionID}:${localID}`, node);
}
// Set the parent relationship of the nodes.
// The parent relationship of a node can be determined through parentIndex.
for (const node of nodeChanges) {
if (node.parentIndex) {
const { sessionID, localID } = node.parentIndex.guid;
const parent = nodes.get(`${sessionID}:${localID}`);
if (parent) {
parent.children ||= [];
parent.children.push(node);
}
}
}
// Sort the child nodes of each node.
// Children are ordered by the position string stored in parentIndex.
for (const node of nodeChanges) {
if (node.children) {
node.children.sort(orderByPosition);
}
}
// We delete the parentIndex because the child relationship setup is complete.
for (const node of nodeChanges) {
delete node.parentIndex;
}
Figma supports a multiplayer environment, so it’s sensitive to changes. Therefore, there’s a need for quick property transformations. While sorting the children of nodes like this might seem inefficient, it’s actually quite convenient because when adding or deleting a child, you only need to insert the position. This makes it easy to use in practice.
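A sketch of the idea (hypothetical position strings, sorted ascending for illustration): inserting between two siblings only requires a new string that sorts between their positions, with no renumbering of the other children.

```javascript
// Two existing siblings with fractional-index position strings.
const children = [{ name: 'a', pos: '!' }, { name: 'b', pos: '#' }];

// Insert a node between them: any string between '!' and '#' works.
children.push({ name: 'c', pos: '"' });

// Restore document order from the position strings alone.
children.sort((x, y) => (x.pos > y.pos) - (x.pos < y.pos));
console.log(children.map((c) => c.name)); // ['a', 'c', 'b']
```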
nodes.get('0:0'); // root node
Now we can find nodes using the specified IDs in the Map.
0:0 represents the root node.
Process blobs
Since Figma is a vector-based tool, it needs a method to store vector data. In particular, it manages drawing paths using a complex algorithm called vectorNetwork, which results in a large amount of stored content.
Let’s now process the blobs.
There are two types of blobs:
- commands
- vectorNetwork
Commands can be viewed as similar to SVG Path data. VectorNetwork is Figma’s unique path expression data.
When traversing from the root node, whenever we encounter the properties commandsBlob or vectorNetworkBlob, we process them as follows.
A property named xxxxBlob holds an index into the blobs array.
function parseBlob(key, { bytes }) {
// Use byteOffset/byteLength in case the blob bytes are a view into a larger buffer.
const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
let offset = 0;
switch (key) {
case 'commands':
const path = [];
while (offset < bytes.length) {
switch (bytes[offset++]) {
case 0:
path.push('Z');
break;
case 1:
if (offset + 8 > bytes.length) return;
path.push('M', view.getFloat32(offset, true), view.getFloat32(offset + 4, true));
offset += 8;
break;
case 2:
if (offset + 8 > bytes.length) return;
path.push('L', view.getFloat32(offset, true), view.getFloat32(offset + 4, true));
offset += 8;
break;
case 3:
if (offset + 16 > bytes.length) return;
path.push(
'Q',
view.getFloat32(offset, true),
view.getFloat32(offset + 4, true),
view.getFloat32(offset + 8, true),
view.getFloat32(offset + 12, true),
);
offset += 16;
break;
case 4:
if (offset + 24 > bytes.length) return;
path.push(
'C',
view.getFloat32(offset, true),
view.getFloat32(offset + 4, true),
view.getFloat32(offset + 8, true),
view.getFloat32(offset + 12, true),
view.getFloat32(offset + 16, true),
view.getFloat32(offset + 20, true),
);
offset += 24;
break;
default:
return;
}
}
return path;
case 'vectorNetwork':
const vertexCount = view.getUint32(0, true);
const segmentCount = view.getUint32(4, true);
const regionCount = view.getUint32(8, true);
// Skip the 12-byte header (three Uint32 counts) before reading vertex data.
offset = 12;
const vertices = [];
const segments = [];
const regions = [];
for (let i = 0; i < vertexCount; i++) {
if (offset + 12 > bytes.length) break;
vertices.push({
styleID: view.getUint32(offset + 0, true),
x: view.getFloat32(offset + 4, true),
y: view.getFloat32(offset + 8, true),
});
offset += 12;
}
for (let i = 0; i < segmentCount; i++) {
if (offset + 28 > bytes.length) break;
const startVertex = view.getUint32(offset + 4, true);
const endVertex = view.getUint32(offset + 16, true);
if (startVertex >= vertexCount || endVertex >= vertexCount) continue;
segments.push({
styleID: view.getUint32(offset + 0, true),
start: {
vertex: startVertex,
dx: view.getFloat32(offset + 8, true),
dy: view.getFloat32(offset + 12, true),
},
end: {
vertex: endVertex,
dx: view.getFloat32(offset + 20, true),
dy: view.getFloat32(offset + 24, true),
},
});
offset += 28;
}
for (let i = 0; i < regionCount; i++) {
if (offset + 8 > bytes.length) break;
let styleID = view.getUint32(offset, true);
const windingRule = styleID & 1 ? 'NONZERO' : 'ODD';
styleID >>= 1;
const loopCount = view.getUint32(offset + 4, true);
const loops = [];
offset += 8;
for (let i = 0; i < loopCount; i++) {
if (offset + 4 > bytes.length) break;
const indexCount = view.getUint32(offset, true);
const indices = [];
offset += 4;
if (offset + indexCount * 4 > bytes.length) return;
for (let k = 0; k < indexCount; k++) {
const segment = view.getUint32(offset, true);
if (segment >= segmentCount) return;
indices.push(segment);
offset += 4;
}
loops.push({
segments: indices,
windingRule,
});
}
regions.push({
styleID,
windingRule,
loops,
});
}
return { vertices, segments, regions };
}
// Unknown blob type: return the raw bytes unchanged.
return bytes;
}
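To make the commands encoding concrete, here is a hand-built blob (hypothetical coordinate values) decoded with the same byte layout as the 'commands' branch above: an opcode byte (0 = close, 1 = moveto, 2 = lineto) followed by little-endian Float32 coordinates.

```javascript
// Hand-built 'commands' blob: [1][x:f32][y:f32] [2][x:f32][y:f32] [0]
const bytes = new Uint8Array(1 + 8 + 1 + 8 + 1);
const view = new DataView(bytes.buffer);
bytes[0] = 1;                  // moveto
view.setFloat32(1, 10, true);
view.setFloat32(5, 20, true);
bytes[9] = 2;                  // lineto
view.setFloat32(10, 30, true);
view.setFloat32(14, 40, true);
bytes[18] = 0;                 // closepath

// Decode with the same logic as the 'commands' branch.
const path = [];
let offset = 0;
while (offset < bytes.length) {
  const op = bytes[offset++];
  if (op === 0) path.push('Z');
  else if (op === 1 || op === 2) {
    path.push(op === 1 ? 'M' : 'L', view.getFloat32(offset, true), view.getFloat32(offset + 4, true));
    offset += 8;
  }
}
console.log(path.join(' ')); // M 10 20 L 30 40 Z
```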
Final decompressed data structure
{
"guid": {
"sessionID": 0,
"localID": 0
},
"phase": "CREATED",
"type": "DOCUMENT",
"name": "Document",
"visible": true,
"opacity": 1,
"transform": {
"m00": 1,
"m01": 0,
"m02": 0,
"m10": 0,
"m11": 1,
"m12": 0
},
"strokeWeight": 0,
"strokeAlign": "CENTER",
"strokeJoin": "BEVEL",
"documentColorProfile": "SRGB",
"children": [
{
"guid": {
"sessionID": 0,
"localID": 2
},
"phase": "CREATED",
"type": "CANVAS",
"name": "Internal Only Canvas",
"visible": false,
"opacity": 1,
"transform": {
"m00": 1,
"m01": 0,
"m02": 0,
"m10": 0,
"m11": 1,
"m12": 0
},
"strokeWeight": 0,
"strokeAlign": "CENTER",
"strokeJoin": "BEVEL",
"internalOnly": true,
"children": [
{
"guid": {
"sessionID": 209,
"localID": 145
},
"phase": "CREATED",
"type": "VARIABLE",
"name": "FrameType",
"isPublishable": true,
"version": "209:28",
"userFacingVersion": "209:28",
"sortPosition": "~~~?",
"variableSetID": {
"guid": {
"sessionID": 1,
"localID": 2
}
},
"variableResolvedType": "STRING",
"variableDataValues": {
"entries": [
{
"modeID": {
"sessionID": 1,
"localID": 0
},
"variableData": {
"value": {
"textValue": "Default"
},
"dataType": "STRING",
"resolvedDataType": "STRING"
}
},
{
"modeID": {
"sessionID": 209,
"localID": 0
},
"variableData": {
"value": {
"textValue": "hover"
},
"dataType": "STRING",
"resolvedDataType": "STRING"
}
},
{
"modeID": {
"sessionID": 209,
"localID": 1
},
"variableData": {
"value": {
"textValue": "pressed"
},
"dataType": "STRING",
"resolvedDataType": "STRING"
}
}
]
},
"isSoftDeleted": false
},
{
"guid": {
"sessionID": 209,
"localID": 144
},
"phase": "CREATED",
"type": "VARIABLE",
"name": "smaple-text",
"isPublishable": true,
"version": "209:31",
"userFacingVersion": "209:31",
"sortPosition": "~~~>",
"variableSetID": {
"guid": {
"sessionID": 1,
"localID": 2
}
},
"variableResolvedType": "STRING",
"variableDataValues": {
"entries": [
{
"modeID": {
"sessionID": 1,
"localID": 0
},
"variableData": {
"value": {
"textValue": "99999"
},
"dataType": "STRING",
"resolvedDataType": "STRING"
}
},
{
"modeID": {
"sessionID": 209,
"localID": 0
},
"variableData": {
"value": {
"textValue": "678678"
},
"dataType": "STRING",
"resolvedDataType": "STRING"
}
},
{
"modeID": {
"sessionID": 209,
"localID": 1
},
"variableData": {
"value": {
"textValue": "uiouio8678"
},
"dataType": "STRING",
"resolvedDataType": "STRING"
}
}
]
},
"isSoftDeleted": false
},
...
]
}
]
}
The output is far too large to show in full. Note that this structure doesn’t exactly match the node structure exposed through the Figma plugin API.
Conclusion
We’ve broadly examined how to view .fig file data. It seems we’ve expanded our understanding of Figma’s structure a bit.
Although we still need to make inferences due to the lack of official documentation, next time we’ll look at how the schema structure and data structure are formed.
In Figma, the on-disk storage structure, the in-memory structure, and the data structure exposed via the REST API are all different. Until the day we fully understand them all, we’ll need to keep digging and exploring.
Here’s the complete source code of what we’ve created so far. This code is designed to run in a Node.js environment. (It can be run in a similar way on the web as well.)
import { ZSTDDecoder } from 'zstddec';
import fs from 'fs';
import pako from 'pako';
import * as zip from '@zip.js/zip.js';
import * as kiwi from 'kiwi-schema';
const decoder = new ZSTDDecoder();
await decoder.init();
const uncompressChunk = (chunkBytes) => {
try {
return pako.inflateRaw(chunkBytes);
} catch (err) {
try {
return decoder.decode(chunkBytes);
} catch {
throw err;
}
}
};
async function decodeFile(arrayBuffer) {
const bytes = arrayBuffer;
const header = String.fromCharCode(...bytes.slice(0, 8));
console.log('header:', header);
// Use byteOffset/byteLength in case the input is a view into a larger buffer (e.g. a Node Buffer).
const view = new DataView(arrayBuffer.buffer, arrayBuffer.byteOffset, arrayBuffer.byteLength);
const version = view.getUint32(8, true);
console.log('version:', version);
const chunks = [];
let offset = 12;
while (offset < bytes.length) {
const chunkLength = view.getUint32(offset, true);
offset += 4;
chunks.push(bytes.slice(offset, offset + chunkLength));
offset += chunkLength;
}
if (chunks.length < 2) throw new Error('Not enough chunks');
const encodedSchema = uncompressChunk(chunks[0]);
const encodedData = uncompressChunk(chunks[1]);
const decodedSchema = kiwi.decodeBinarySchema(encodedSchema);
console.log(decodedSchema.definitions[2], encodedData);
const schema = kiwi.compileSchema(decodedSchema);
const { nodeChanges, blobs } = schema.decodeMessage(encodedData);
const nodes = new Map();
const orderByPosition = ({ parentIndex: { position: a } }, { parentIndex: { position: b } }) => {
return (a < b) - (a > b);
};
for (const node of nodeChanges) {
const { sessionID, localID } = node.guid;
nodes.set(`${sessionID}:${localID}`, node);
}
for (const node of nodeChanges) {
if (node.parentIndex) {
const { sessionID, localID } = node.parentIndex.guid;
const parent = nodes.get(`${sessionID}:${localID}`);
if (parent) {
parent.children ||= [];
parent.children.push(node);
}
}
}
for (const node of nodeChanges) {
if (node.children) {
node.children.sort(orderByPosition);
}
}
for (const node of nodeChanges) {
delete node.parentIndex;
}
console.log('schema:', encodedSchema);
console.log('data:', encodedData);
return { encodedSchema, encodedData, nodes, nodeChanges, version, root: nodes.get('0:0'), blobs };
}
function parseBlob(key, { bytes }) {
// Use byteOffset/byteLength in case the blob bytes are a view into a larger buffer.
const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
let offset = 0;
switch (key) {
case 'commands':
const path = [];
while (offset < bytes.length) {
switch (bytes[offset++]) {
case 0:
path.push('Z');
break;
case 1:
if (offset + 8 > bytes.length) return;
path.push('M', view.getFloat32(offset, true), view.getFloat32(offset + 4, true));
offset += 8;
break;
case 2:
if (offset + 8 > bytes.length) return;
path.push('L', view.getFloat32(offset, true), view.getFloat32(offset + 4, true));
offset += 8;
break;
case 3:
if (offset + 16 > bytes.length) return;
path.push(
'Q',
view.getFloat32(offset, true),
view.getFloat32(offset + 4, true),
view.getFloat32(offset + 8, true),
view.getFloat32(offset + 12, true),
);
offset += 16;
break;
case 4:
if (offset + 24 > bytes.length) return;
path.push(
'C',
view.getFloat32(offset, true),
view.getFloat32(offset + 4, true),
view.getFloat32(offset + 8, true),
view.getFloat32(offset + 12, true),
view.getFloat32(offset + 16, true),
view.getFloat32(offset + 20, true),
);
offset += 24;
break;
default:
return;
}
}
return path;
case 'vectorNetwork':
const vertexCount = view.getUint32(0, true);
const segmentCount = view.getUint32(4, true);
const regionCount = view.getUint32(8, true);
// Skip the 12-byte header (three Uint32 counts) before reading vertex data.
offset = 12;
const vertices = [];
const segments = [];
const regions = [];
for (let i = 0; i < vertexCount; i++) {
if (offset + 12 > bytes.length) break;
vertices.push({
styleID: view.getUint32(offset + 0, true),
x: view.getFloat32(offset + 4, true),
y: view.getFloat32(offset + 8, true),
});
offset += 12;
}
for (let i = 0; i < segmentCount; i++) {
if (offset + 28 > bytes.length) break;
const startVertex = view.getUint32(offset + 4, true);
const endVertex = view.getUint32(offset + 16, true);
if (startVertex >= vertexCount || endVertex >= vertexCount) continue;
segments.push({
styleID: view.getUint32(offset + 0, true),
start: {
vertex: startVertex,
dx: view.getFloat32(offset + 8, true),
dy: view.getFloat32(offset + 12, true),
},
end: {
vertex: endVertex,
dx: view.getFloat32(offset + 20, true),
dy: view.getFloat32(offset + 24, true),
},
});
offset += 28;
}
for (let i = 0; i < regionCount; i++) {
if (offset + 8 > bytes.length) break;
let styleID = view.getUint32(offset, true);
const windingRule = styleID & 1 ? 'NONZERO' : 'ODD';
styleID >>= 1;
const loopCount = view.getUint32(offset + 4, true);
const loops = [];
offset += 8;
for (let i = 0; i < loopCount; i++) {
if (offset + 4 > bytes.length) break;
const indexCount = view.getUint32(offset, true);
const indices = [];
offset += 4;
if (offset + indexCount * 4 > bytes.length) return;
for (let k = 0; k < indexCount; k++) {
const segment = view.getUint32(offset, true);
if (segment >= segmentCount) return;
indices.push(segment);
offset += 4;
}
loops.push({
segments: indices,
windingRule,
});
}
regions.push({
styleID,
windingRule,
loops,
});
}
return { vertices, segments, regions };
}
// Unknown blob type: return the raw bytes unchanged.
return bytes;
}
function recoverProperty(prop, blobs) {
if (Array.isArray(prop)) {
return prop.map((item) => recoverProperty(item, blobs));
} else if (typeof prop === 'object' && prop !== null) {
return recoverObject(prop, blobs);
}
return prop;
}
function recoverObject(obj, blobs) {
for (const [key, value] of Object.entries(obj)) {
if (key.endsWith('Blob')) {
const blobKey = key.slice(0, -4);
const blobData = blobs[value];
if (blobData) {
obj[blobKey] = parseBlob(blobKey, blobData);
delete obj[key];
}
} else {
obj[key] = recoverProperty(value, blobs);
}
}
return obj;
}
function recoverNode(node, blobs) {
recoverObject(node, blobs);
if (node.children) {
node.children = node.children.map((child) => recoverNode(child, blobs));
}
return node;
}
async function showPreview(filename, bytes, originalFilePrefix) {
const isPNG = String.fromCharCode(...bytes.slice(0, 8)) === String.fromCharCode(137, 80, 78, 71, 13, 10, 26, 10);
const isJPEG = String.fromCharCode(...bytes.slice(0, 2)) === String.fromCharCode(255, 216);
const isGIF = ['GIF87a', 'GIF89a'].includes(String.fromCharCode(...bytes.slice(0, 6)));
if (isPNG || isJPEG || isGIF) {
const imageType = isPNG ? 'PNG' : isJPEG ? 'JPEG' : 'GIF';
// add preview image
return;
}
// if file is json
if (filename.endsWith('.json')) {
const json = JSON.parse(new TextDecoder().decode(bytes));
console.log('JSON is checked:', json);
// json file logic
return;
}
// if file is figma file
const figHeader = String.fromCharCode(...bytes.slice(0, 8));
console.log('figHeader', figHeader);
if (figHeader === 'fig-kiwi' || figHeader === 'fig-jam.' || figHeader === 'fig-deck') {
const fileType = figHeader === 'fig-jam.' ? 'FigJam' : figHeader === 'fig-deck' ? 'FigDeck' : 'Figma';
console.log(`${fileType} file detected`);
const { encodedSchema, encodedData, version, root, nodeChanges, blobs } = await decodeFile(bytes);
console.log('version', version);
const newRoot = recoverNode(root, blobs);
return newRoot;
}
}
async function startLoading(files) {
try {
const file = files[0];
const originalFilePrefix = file.name.replace(/\.fig$/, '');
const arrayBuffer = file.arrayBuffer;
const zipHeader = String.fromCharCode(...arrayBuffer.slice(0, 2));
let fileEntries = [];
if (zipHeader === 'PK') {
const zipReader = new zip.ZipReader(new zip.Uint8ArrayReader(new Uint8Array(arrayBuffer)));
const entries = await zipReader.getEntries();
fileEntries = entries.filter((entry) => !entry.directory);
} else {
fileEntries = [{ filename: file.name, getData: async () => new Uint8Array(arrayBuffer) }];
}
console.log(
'file entries:',
fileEntries.map((entry) => entry.filename),
);
// Await each entry in turn; forEach(async ...) would not wait for decoding to finish.
for (const entry of fileEntries) {
const bytes = await entry.getData(new zip.Uint8ArrayWriter());
const node = await showPreview(entry.filename, bytes, originalFilePrefix);
entry.node = node;
}
console.log('decoding file is complete:', originalFilePrefix);
} catch (err) {
console.error('error:', err);
}
}
class File {
constructor(arrayBuffer, name) {
this.arrayBuffer = arrayBuffer;
this.name = name;
}
}
const testFile = new File(await fs.promises.readFile('./scripts/sample-design.fig'), 'sample-design.fig');
await startLoading([testFile]);