
Uncaught (in promise) Error: Can't create a session #931

Open

Bewinxed opened this issue Sep 12, 2024 · 0 comments
Labels
bug Something isn't working

Comments


Bewinxed commented Sep 12, 2024

System Info

"@xenova/transformers": "^2.17.2",

Using this on Windows with Microsoft Edge and Vite.

Environment/Platform

  • Website/web-app
  • Browser extension
  • Server-side (e.g., Node.js, Deno, Bun)
  • Desktop app (e.g., Electron)
  • Other (e.g., VSCode extension)

Description

I converted the jinaai/reader-lm-1.5b model using the conversion script mentioned in the docs.

Then I renamed the quantized model to decoder_model_merged_quantized.onnx

The converted model is here: Bewinxed/reader-lm-1.5b-onnx

I tried loading it remotely, but for speed I placed the model in /public/models
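For what it's worth, here is a sketch of the URL I believe transformers.js v2 ends up requesting for this setup, given env.localModelPath = "/models/". The exact layout (including the onnx/ subfolder) is my assumption based on how v2 organizes converted models; localModelUrl is just an illustrative helper, not a library function:

```javascript
// Sketch: the file URL transformers.js v2 should request for a local model.
// Assumes the v2 convention of an `onnx/` subfolder inside the model directory.
function localModelUrl(localModelPath, modelId, fileName) {
  // Join segments, avoiding a duplicate slash after the base path.
  return [localModelPath.replace(/\/$/, ''), modelId, 'onnx', fileName].join('/');
}

const url = localModelUrl(
  '/models/',
  'Bewinxed/reader-lm-1.5b-onnx',
  'decoder_model_merged_quantized.onnx'
);
// → "/models/Bewinxed/reader-lm-1.5b-onnx/onnx/decoder_model_merged_quantized.onnx"
```

If the file is not at exactly that path under /public, Vite may answer the fetch with index.html instead of a 404, and the failure then surfaces later inside ONNX Runtime as a session-creation error.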

The loading progress updates and RAM usage increases, so the model is being loaded.

But after a short wait, I get this in the console:

ort-wasm.js:25  Unknown Exception
lt @ ort-wasm.js:25
P @ ort-wasm.js:44
$func11504 @ ort-wasm-simd.wasm:0x82c2bc
$func2149 @ ort-wasm-simd.wasm:0x16396e
$func584 @ ort-wasm-simd.wasm:0x48a63
$func11427 @ ort-wasm-simd.wasm:0x829582
$func4164 @ ort-wasm-simd.wasm:0x339b6f
$func4160 @ ort-wasm-simd.wasm:0x339aff
j @ ort-wasm.js:56
$func356 @ ort-wasm-simd.wasm:0x2e215
j @ ort-wasm.js:56
$func339 @ ort-wasm-simd.wasm:0x28e06
$Ra @ ort-wasm-simd.wasm:0x6ebffb
e2._OrtCreateSession @ ort-wasm.js:48
e.createSessionFinalize @ wasm-core-impl.ts:53
e.createSession @ wasm-core-impl.ts:99
e.createSession @ proxy-wrapper.ts:187
loadModel @ session-handler.ts:65
await in loadModel
createSessionHandler @ backend-wasm.ts:48
create @ inference-session-impl.ts:189
await in create
constructSession @ models.js:126
wasm-core-impl.ts:55  Uncaught (in promise) Error: Can't create a session
    at e.createSessionFinalize (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:13232:119)
    at e.createSession (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:13250:46)
    at e.createSession (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:13034:17)
    at e.OnnxruntimeWebAssemblySessionHandler.loadModel (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:13110:98)
    at async Object.createSessionHandler (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:5140:20)
    at async _InferenceSession.create (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:725:25)
    at async constructSession (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:21756:12)
    at async Promise.all (index 1)
    at async Qwen2ForCausalLM.from_pretrained (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:22130:14)
    at async AutoModelForCausalLM.from_pretrained (http://localhost:5173/node_modules/.vite/deps/@xenova_transformers.js?v=4cc7a29e:24837:14)
e.createSessionFinalize @ wasm-core-impl.ts:55
e.createSession @ wasm-core-impl.ts:99
e.createSession @ proxy-wrapper.ts:187
loadModel @ session-handler.ts:65

Here is my worker script:

import { pipeline, env } from '@xenova/transformers';

env.localModelPath = "/models/";
env.allowLocalModels = true;
env.useBrowserCache = false;
env.allowRemoteModels = false;
env.backends.onnx.wasm.proxy = true;

class ReaderLM {
  static task = 'text-generation';
  static model = 'Bewinxed/reader-lm-1.5b-onnx';
  /** @type {import('@xenova/transformers').Pipeline} */
  static instance = null;

  static async getInstance(progress_callback = null) {
    if (ReaderLM.instance === null) {
      ReaderLM.instance = pipeline(ReaderLM.task, ReaderLM.model, {
        progress_callback,
      });
    }
    return ReaderLM.instance;
  }
}

self.addEventListener('message', async (event) => {
  let readerlm = await ReaderLM.getInstance((x) => {
    self.postMessage(x);
  });

  let output = await readerlm([{
    role: 'user',
    content: event.data.text,
  }], {
    callback_function: (x) => {
      self.postMessage({
        status: 'update',
        output: readerlm.tokenizer.decode(x[0].output_token_ids, { skip_special_tokens: true }),
      });
    },
  });

  self.postMessage({
    status: 'complete',
    output: output,
  });
});

console.debug("ReaderLM worker loaded");
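For context, this is roughly how I consume the worker on the main thread. Only the {status, output} message shapes come from the worker script above; handleWorkerMessage and render are names I made up for this sketch:

```javascript
// Hypothetical main-thread counterpart for the worker above.
// Classifies messages by the `status` field the worker script posts.
function handleWorkerMessage(data) {
  if (data.status === 'update') return { kind: 'partial', text: data.output };
  if (data.status === 'complete') return { kind: 'done', text: data.output };
  // Anything else is a progress event forwarded from the pipeline loader.
  return { kind: 'progress', info: data };
}

// Browser-only wiring (not runnable outside the page):
// const worker = new Worker(new URL('./worker.js', import.meta.url), { type: 'module' });
// worker.onmessage = (e) => render(handleWorkerMessage(e.data));
// worker.postMessage({ text: markup });
```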

Reproduction

  1. Follow the documentation instructions to call the LLM.
  2. Download the model.
  3. Call the model with:

     worker.postMessage({
       text: markup,
     })

  4. Get the error above.
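One check I can suggest for anyone reproducing this: rule out the dev server returning HTML instead of the model file. An ONNX model is a protobuf that in practice starts with byte 0x08 (the varint tag of field 1, ir_version), so a quick heuristic can distinguish it from an index.html fallback. isLikelyOnnx is my own helper name, and the magic-byte check is a heuristic, not a spec guarantee:

```javascript
// Sanity check that the fetched bytes look like an ONNX protobuf rather than,
// say, index.html returned by the dev server for a missing file.
// ONNX ModelProto files in practice begin with 0x08 (varint field 1, ir_version).
function isLikelyOnnx(bytes) {
  return bytes.length > 0 && bytes[0] === 0x08;
}

// In the browser console (path shown matches the setup in this issue):
// const res = await fetch('/models/Bewinxed/reader-lm-1.5b-onnx/onnx/decoder_model_merged_quantized.onnx');
// console.log(isLikelyOnnx(new Uint8Array(await res.arrayBuffer())));
```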

THANK YOU!

@Bewinxed Bewinxed added the bug Something isn't working label Sep 12, 2024