Instructions for using SciPhi/Triplex with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use SciPhi/Triplex with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SciPhi/Triplex", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SciPhi/Triplex", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("SciPhi/Triplex", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
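Since Triplex is tuned for triplet extraction rather than open-ended chat, you will usually want to send it the extraction prompt described in the Usage section further down this card. A minimal sketch with the pipeline above; the entity types, predicates, and text here are illustrative:

import json
from transformers import pipeline

pipe = pipeline("text-generation", model="SciPhi/Triplex", trust_remote_code=True)

# Prompt format taken from the Usage section of this card.
prompt = """Perform Named Entity Recognition (NER) and extract knowledge graph triplets from the text. NER identifies named entities of given entity types, and triple extraction identifies relationships between entities using specified predicates.

**Entity Types:**
{entity_types}

**Predicates:**
{predicates}

**Text:**
{text}
""".format(
    entity_types=json.dumps({"entity_types": ["CITY", "NUMBER"]}),
    predicates=json.dumps({"predicates": ["POPULATION"]}),
    text="San Francisco had a population of 808,437 residents as of 2022.",
)

print(pipe([{"role": "user", "content": prompt}], max_new_tokens=512))
- llama-cpp-python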
How to use SciPhi/Triplex with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SciPhi/Triplex",
    filename="quantized_model-Q4_K_M.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
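create_chat_completion also accepts stream=True, yielding OpenAI-style chunks. A minimal streaming sketch, assuming the same llm object built above:

# Stream tokens as they are generated instead of waiting for the full reply.
stream = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
- Notebooks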
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use SciPhi/Triplex with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf SciPhi/Triplex:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf SciPhi/Triplex:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf SciPhi/Triplex:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf SciPhi/Triplex:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf SciPhi/Triplex:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf SciPhi/Triplex:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf SciPhi/Triplex:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf SciPhi/Triplex:Q4_K_M
Use Docker
docker model run hf.co/SciPhi/Triplex:Q4_K_M
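Because llama-server exposes an OpenAI-compatible API, any HTTP client can talk to it. A minimal sketch in Python, assuming the server started above is running locally on its default port 8080:

import requests

# Call llama-server's OpenAI-compatible chat endpoint (default port 8080).
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])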
- LM Studio
- Jan
- vLLM
How to use SciPhi/Triplex with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SciPhi/Triplex"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "SciPhi/Triplex",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
Use Docker
docker model run hf.co/SciPhi/Triplex:Q4_K_M
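The same server can also be called from Python with the official openai client by pointing it at the local base URL. A minimal sketch, assuming vLLM is serving on port 8000 as above (the api_key value is a placeholder; vLLM does not check it by default):

from openai import OpenAI

# Point the OpenAI client at the local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="SciPhi/Triplex",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)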
- SGLang
How to use SciPhi/Triplex with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "SciPhi/Triplex" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "SciPhi/Triplex",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "SciPhi/Triplex" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "SciPhi/Triplex",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
- Ollama
How to use SciPhi/Triplex with Ollama:
ollama run hf.co/SciPhi/Triplex:Q4_K_M
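Ollama also serves a local REST API on port 11434. A minimal sketch calling it from Python, assuming the model was pulled with the command above:

import requests

# Ollama's local chat endpoint; "stream": False returns a single JSON reply.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/SciPhi/Triplex:Q4_K_M",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["message"]["content"])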
- Unsloth Studio
How to use SciPhi/Triplex with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for SciPhi/Triplex to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for SciPhi/Triplex to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for SciPhi/Triplex to start chatting
- Docker Model Runner
How to use SciPhi/Triplex with Docker Model Runner:
docker model run hf.co/SciPhi/Triplex:Q4_K_M
- Lemonade
How to use SciPhi/Triplex with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull SciPhi/Triplex:Q4_K_M
Run and chat with the model
lemonade run user.Triplex-Q4_K_M
List all available models
lemonade list
Triplex: a SOTA LLM for knowledge graph construction.
Knowledge graphs, such as the one behind Microsoft's Graph RAG, enhance RAG methods but are expensive to build. Triplex cuts the cost of knowledge graph creation by 98%, outperforming GPT-4 at 1/60th the cost and enabling local graph building with SciPhi's R2R.
Triplex, developed by SciPhi.AI, is a finetuned version of Phi3-3.8B for creating knowledge graphs from unstructured data. It works by extracting triplets (simple statements consisting of a subject, a predicate, and an object) from text or other data sources. For example, from "With a population of 808,437 residents as of 2022, San Francisco is..." it can extract the triplet (San Francisco, POPULATION, 808,437).
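To make that concrete, a hypothetical sketch of the structure a triplet represents; the field names here are illustrative, not Triplex's exact output schema:

# Illustrative only: one (subject, predicate, object) triplet as plain data,
# using the POPULATION example from the Usage section below.
triplet = {
    "subject": "San Francisco",
    "predicate": "POPULATION",
    "object": "808,437",
}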
Benchmark
Usage:
- Blog: https://www.sciphi.ai/blog/triplex
- Demo: kg.sciphi.ai
- Cookbook: https://r2r-docs.sciphi.ai/cookbooks/knowledge-graph
- Python:
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

def triplextract(model, tokenizer, text, entity_types, predicates):

    input_format = """Perform Named Entity Recognition (NER) and extract knowledge graph triplets from the text. NER identifies named entities of given entity types, and triple extraction identifies relationships between entities using specified predicates.

**Entity Types:**
{entity_types}

**Predicates:**
{predicates}

**Text:**
{text}
"""

    message = input_format.format(
        entity_types=json.dumps({"entity_types": entity_types}),
        predicates=json.dumps({"predicates": predicates}),
        text=text,
    )

    messages = [{"role": "user", "content": message}]
    input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
    output = tokenizer.decode(model.generate(input_ids=input_ids, max_length=2048)[0], skip_special_tokens=True)
    return output

model = AutoModelForCausalLM.from_pretrained("sciphi/triplex", trust_remote_code=True).to("cuda").eval()
tokenizer = AutoTokenizer.from_pretrained("sciphi/triplex", trust_remote_code=True)

entity_types = ["LOCATION", "POSITION", "DATE", "CITY", "COUNTRY", "NUMBER"]
predicates = ["POPULATION", "AREA"]
text = """
San Francisco,[24] officially the City and County of San Francisco, is a commercial, financial, and cultural center in Northern California.

With a population of 808,437 residents as of 2022, San Francisco is the fourth most populous city in the U.S. state of California behind Los Angeles, San Diego, and San Jose.
"""

prediction = triplextract(model, tokenizer, text, entity_types, predicates)
print(prediction)
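Note that prediction is the full decoded sequence, so it contains the echoed prompt followed by the model's extraction. A best-effort sketch for pulling a JSON object out of that text; the exact output schema is not documented in this card, so inspect the raw prediction first:

import json
import re

def parse_triplex_json(raw):
    # Best-effort: the echoed prompt also contains braces, so try candidate
    # opening braces from right to left until one parses as valid JSON.
    end = raw.rfind("}") + 1
    if end == 0:
        return None
    for m in reversed(list(re.finditer(r"\{", raw[:end]))):
        try:
            return json.loads(raw[m.start():end])
        except json.JSONDecodeError:
            continue
    return None

print(parse_triplex_json(prediction))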
Commercial usage
We want Triplex to be as widely accessible as possible, but as an early-stage organization we also need to keep commercial concerns in mind. Research and personal usage are fine, but we are placing some restrictions on commercial usage.
The model weights are licensed under cc-by-nc-sa-4.0, but we will waive these restrictions for any organization with under $5M USD in gross revenue in the most recent 12-month period. If you want to remove the GPL license requirements (dual-license) and/or use the weights commercially above the revenue limit, please reach out to our team at founders@sciphi.ai.
Citation
@misc{pimpalgaonkar2024triplex,
author = {Pimpalgaonkar, Shreyas and Tremelling, Nolan and Colegrove, Owen},
title = {Triplex: a SOTA LLM for knowledge graph construction},
year = {2024},
url = {https://huggingface.co/sciphi/triplex}
}