
System.AccessViolationException on llama_backend_init() #1062

Open
plqplq opened this issue Jan 28, 2025 · 1 comment

Comments


plqplq commented Jan 28, 2025

Description

I'm running a local instance of LLamaSharp 0.20.0 with the same version of the CPU backend, both of which are the latest on NuGet.

On my dev machine it works fine, but on a production server (same Windows Server 2022 OS, same i7-12700 CPU, same 32 GB of memory) it breaks while initialising the Llama backend:

[Screenshot: debugger stopped at the point of the crash]

Note that it got past the llama_empty_call() line successfully.

The exception text is:

System.AccessViolationException: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt.'

C# code calling in is breaking on the first ModelParams reference:

string modelPath = @"c:\llamamodels\sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa";   // this came with ollama

var parameters = new ModelParams(modelPath)
{
    ContextSize = 1024,
    GpuLayerCount = 0
};

It's quite hard to understand why it works on my dev machine and not on a similar production box. I made sure both are running the same version of .NET 8, but this looks like it's happening in the C++ layer anyway.
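One way to compare "similar" machines more precisely is to dump which x86 SIMD extensions the runtime detects on each box. This is only a diagnostic sketch (not part of the LLamaSharp API): a native backend compiled for an instruction set the production CPU lacks, or misreports, can fault in exactly this way, so a diff of this output between the two machines may be informative.

```csharp
using System;
using System.Runtime.Intrinsics.X86;

// Prints the x86 SIMD extensions .NET detects on the current machine.
// Run on both the dev box and the production server and compare.
class CpuFeatureDump
{
    static void Main()
    {
        Console.WriteLine($"SSE4.2  : {Sse42.IsSupported}");
        Console.WriteLine($"AVX     : {Avx.IsSupported}");
        Console.WriteLine($"AVX2    : {Avx2.IsSupported}");
        Console.WriteLine($"FMA     : {Fma.IsSupported}");
        Console.WriteLine($"AVX-512F: {Avx512F.IsSupported}");
    }
}
```

The output is machine-dependent by design, so there is no single "correct" result; any line that differs between the two servers is a lead.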

Here's the call stack placed in the console after the error:

Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Repeat 2 times:
--------------------------------
   at LLama.Native.NativeApi.llama_backend_init()
--------------------------------
   at LLama.Native.NativeApi..cctor()
   at LLama.Native.NativeApi.llama_max_devices()
   at LLama.Abstractions.TensorSplitsCollection..ctor()
   at LLama.Common.ModelParams..ctor(System.String)
   at LlamaTest.Program+<AI1>d__5.MoveNext()
   at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
   at LlamaTest.Program.AI1()
   at LlamaTest.Program+<Main>d__0.MoveNext()
   at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
   at LlamaTest.Program.Main(System.String[])
   at LlamaTest.Program.<Main>(System.String[])

and in the event log there are two separate entries:

Faulting application name: LlamaCS2.exe, version: 1.0.0.0, time stamp: 0x66960000
Faulting module name: coreclr.dll, version: 8.0.824.36612, time stamp: 0x6696b815
Exception code: 0xc0000005
Fault offset: 0x00000000001c2090
Faulting process id: 0x1964
Faulting application start time: 0x01db71af873801b1
Faulting application path: C:\xAssetsAI\TestDebug\LlamaCS2.exe
Faulting module path: C:\Program Files\dotnet\shared\Microsoft.NETCore.App\8.0.8\coreclr.dll
Report Id: b863577b-2875-4c2c-8a8f-d4a92b77c78c
Faulting package full name: 
Faulting package-relative application ID: 

and

Application: LlamaCS2.exe
CoreCLR Version: 8.0.824.36612
.NET Version: 8.0.8
Description: The process was terminated due to an unhandled exception.
Exception Info: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Stack:
   at LLama.Native.NativeApi.llama_backend_init()
   at LLama.Native.NativeApi.llama_backend_init()
   at LLama.Native.NativeApi..cctor()
   at LLama.Native.NativeApi.llama_max_devices()
   at LLama.Abstractions.TensorSplitsCollection..ctor()
   at LLama.Common.ModelParams..ctor(System.String)
   at LlamaTest.Program+<AI1>d__5.MoveNext()
   at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
   at LlamaTest.Program.AI1()
   at LlamaTest.Program+<Main>d__0.MoveNext()
   at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
   at LlamaTest.Program.Main(System.String[])
   at LlamaTest.Program.<Main>(System.String[])


plqplq commented Jan 28, 2025

Further info: if I downgrade the backend to LLamaSharp.Backend.Cpu version 0.19.0, it does work on the failing machine.
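For reference, the workaround described above corresponds to pinning the packages roughly like this in the .csproj (a sketch only; whether the 0.20.0 managed package is fully compatible with the 0.19.0 backend is not confirmed in this thread):

```xml
<ItemGroup>
  <PackageReference Include="LLamaSharp" Version="0.20.0" />
  <!-- Backend downgraded to 0.19.0 -- per this report, 0.20.0 crashes on the failing machine -->
  <PackageReference Include="LLamaSharp.Backend.Cpu" Version="0.19.0" />
</ItemGroup>
```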
