I found a problem when working with Ollama in Lightning: I have to reinstall the Ollama server and the models every time I reopen the studio after sleep mode!
Hey, I have this problem too. It happens because Ollama installs under the user's local directory, and as you know, anything that is not on the teamspace drive doesn't get saved. One way to work around that is to use some external storage for it.
I found a solution. Check out this video: https://youtu.be/H3fdfumFWew It shows how to install Ollama in Lightning AI so that both the Ollama installation and the models live in the persistent area.
Yeah, it's a great solution. I managed to get Ollama and the model persisted, and the fix was very simple. Now I just have to figure out how to save the Ollama WebUI user credentials.
In case the instructions in the video mentioned in @johnalexamon's reply were not obvious, the slightly elaborated steps below might help (assuming the default bash shell):
Open a terminal (bash shell) and run:
curl -fsSL https://ollama.com/install.sh > ollama_install.sh
Then open ollama_install.sh, find line 69, and change it to the below:
OLLAMA_INSTALL_DIR="/teamspace/studios/this_studio/"
Save the ollama_install.sh file and exit back to the bash shell.
In the bash shell, run the commands below:
chmod +x ollama_install.sh
./ollama_install.sh
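If you'd rather not edit the file by hand, the same change can be scripted with sed. The sketch below uses a hypothetical one-line stand-in file (sample_install.sh) so the pattern can be checked safely; against the real download you would run the sed line on ollama_install.sh instead. Matching the OLLAMA_INSTALL_DIR assignment itself, rather than relying on it being exactly line 69, also survives future reshuffling of the upstream script.

```shell
# Stand-in file for illustration only; on the real download, run the
# sed command below against ollama_install.sh instead.
printf 'OLLAMA_INSTALL_DIR=%s\n' '"/usr/local"' > sample_install.sh

# Rewrite whichever line assigns OLLAMA_INSTALL_DIR, so the edit does
# not depend on the exact line number in the script.
sed -i 's|^OLLAMA_INSTALL_DIR=.*|OLLAMA_INSTALL_DIR="/teamspace/studios/this_studio/"|' sample_install.sh

cat sample_install.sh
```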
Install nano for editing:
sudo apt update
sudo apt install nano
Run the below to open ~/.bashrc with the nano editor:
nano ~/.bashrc
Add the line below near the end of the file:
export PATH="/teamspace/studios/this_studio/bin:$PATH"
Save the file, exit back to the bash shell, and refresh the shell by running the command below:
source ~/.bashrc
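If you'd rather skip nano entirely, the same ~/.bashrc edit can be done non-interactively. A small sketch: it appends the export line only if it isn't already present, so re-running it after a restart is harmless.

```shell
# Append the PATH export to ~/.bashrc only if it's not already there,
# so running this repeatedly doesn't add duplicate lines.
LINE='export PATH="/teamspace/studios/this_studio/bin:$PATH"'
grep -qxF "$LINE" ~/.bashrc 2>/dev/null || echo "$LINE" >> ~/.bashrc
```

After that, `source ~/.bashrc` picks up the change as in the steps above.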
Run the command below to start the ollama server:
ollama serve
Then open another bash terminal to run a model supported by Ollama; the server log will appear in the original terminal. For example, open a new terminal and run the below:
ollama run llama3.2
Hey, thanks for the more elaborated instructions. Vim was giving me some problems, so I came here to post about nano, but thanks.