[WIP] Phi3poc #2301


Merged

69 commits, merged Apr 10, 2025

Commits
f0c2b00
poc
JessicaXYWang Sep 12, 2024
603777a
poc
JessicaXYWang Oct 15, 2024
47ae241
Merge branch 'master' into phi3poc
JessicaXYWang Oct 15, 2024
23f8ca0
rename module
JessicaXYWang Oct 15, 2024
bb5b2b6
Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML …
JessicaXYWang Oct 15, 2024
f235535
update dependency
JessicaXYWang Oct 17, 2024
f2ab308
Merge branch 'master' into phi3poc
JessicaXYWang Oct 17, 2024
3ee9168
add set device type
JessicaXYWang Oct 21, 2024
b30f168
add Downloader
JessicaXYWang Jan 2, 2025
d760733
remove import
JessicaXYWang Jan 2, 2025
6efa59c
Merge branch 'master' into phi3poc
JessicaXYWang Jan 2, 2025
c7397f3
update lm
JessicaXYWang Jan 10, 2025
e1105fd
Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML …
JessicaXYWang Jan 10, 2025
e59a981
Merge branch 'master' into phi3poc
JessicaXYWang Jan 10, 2025
ff8ad7f
pyarrow version conflict
JessicaXYWang Jan 13, 2025
56e623d
Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML …
JessicaXYWang Jan 13, 2025
efa6aa0
update transformers version
JessicaXYWang Jan 14, 2025
2f5338c
add dependency
JessicaXYWang Jan 14, 2025
ff89511
update transformers version
JessicaXYWang Jan 14, 2025
b3dc5da
add phi3 test
JessicaXYWang Jan 16, 2025
c0cd463
test missing transformers library
JessicaXYWang Jan 16, 2025
e3e331c
update databricks test
JessicaXYWang Jan 16, 2025
382a20e
update databricks test
JessicaXYWang Jan 16, 2025
0a0f80c
update db library
JessicaXYWang Jan 17, 2025
eac0293
update doc
JessicaXYWang Jan 23, 2025
7a3e315
format
JessicaXYWang Jan 23, 2025
465161a
add broadcast model
JessicaXYWang Mar 3, 2025
4c059dc
Merge branch 'master' into phi3poc
JessicaXYWang Mar 3, 2025
4b91579
temporarily remove horovod for testing
JessicaXYWang Mar 4, 2025
caf6de7
Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML …
JessicaXYWang Mar 4, 2025
346615f
test with previous transformers version
JessicaXYWang Mar 4, 2025
72aa18e
test
JessicaXYWang Mar 4, 2025
87edc13
test env
JessicaXYWang Mar 5, 2025
5fcb372
test
JessicaXYWang Mar 5, 2025
068bd99
test
JessicaXYWang Mar 6, 2025
8282df4
test
JessicaXYWang Mar 6, 2025
0d7aafd
test
JessicaXYWang Mar 6, 2025
1a5a9b6
fix broadcasting
JessicaXYWang Mar 28, 2025
95079dd
Merge branch 'master' into phi3poc
mhamilton723 Mar 28, 2025
869bad8
update dependency
JessicaXYWang Mar 31, 2025
a2fb10b
Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML …
JessicaXYWang Mar 31, 2025
2786c90
test without hadoop client api
JessicaXYWang Apr 1, 2025
10f99f4
Merge branch 'master' into phi3poc
JessicaXYWang Apr 1, 2025
21feb43
update ubuntu version
JessicaXYWang Apr 1, 2025
53cefd9
Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML …
JessicaXYWang Apr 1, 2025
4f3c20d
update E2E phi3 test
JessicaXYWang Apr 1, 2025
36eb25f
exclude phi3 synapse test
JessicaXYWang Apr 2, 2025
d20546b
bug fix
JessicaXYWang Apr 2, 2025
4038df5
Merge branch 'master' into phi3poc
JessicaXYWang Apr 3, 2025
d05ff76
fix style
JessicaXYWang Apr 3, 2025
8bf8c11
Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML …
JessicaXYWang Apr 3, 2025
cf7fb78
format
JessicaXYWang Apr 3, 2025
a0e1cca
add gpu test library
JessicaXYWang Apr 3, 2025
9738d83
update causallm
JessicaXYWang Apr 6, 2025
9babd03
Merge branch 'master' into phi3poc
JessicaXYWang Apr 6, 2025
86fb464
update model
JessicaXYWang Apr 7, 2025
253aaad
Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML …
JessicaXYWang Apr 7, 2025
b6fa41f
bug fix
JessicaXYWang Apr 7, 2025
6392848
add phi4 to e2e, update transformers version
JessicaXYWang Apr 9, 2025
f1bb367
update env
JessicaXYWang Apr 9, 2025
70265af
add dependency
JessicaXYWang Apr 9, 2025
c0fe909
update phi e2e
JessicaXYWang Apr 9, 2025
9f96240
Merge branch 'master' into phi3poc
JessicaXYWang Apr 9, 2025
9e37dee
increase timeout
JessicaXYWang Apr 10, 2025
19ce5ba
Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML …
JessicaXYWang Apr 10, 2025
e139136
test run
JessicaXYWang Apr 10, 2025
29df6f8
reduce concurrency on gpu cluster
JessicaXYWang Apr 10, 2025
51b42e5
format
JessicaXYWang Apr 10, 2025
bdd7b15
Update core/src/test/scala/com/microsoft/azure/synapse/ml/nbtest/Data…
mhamilton723 Apr 10, 2025
331 changes: 331 additions & 0 deletions core/src/main/python/synapse/ml/llm/HuggingFaceCausallmTransform.py
@@ -0,0 +1,331 @@
import os

from pyspark import keyword_only
from pyspark.ml import Transformer
from pyspark.ml.param.shared import (
    HasInputCol,
    HasOutputCol,
    Param,
    Params,
    TypeConverters,
)
from pyspark.ml.util import DefaultParamsReadable, DefaultParamsWritable
from pyspark.sql import Row, SparkSession
from pyspark.sql.types import StringType, StructField, StructType
from transformers import AutoModelForCausalLM, AutoTokenizer


class _PeekableIterator:
    def __init__(self, iterable):
        self._iterator = iter(iterable)
        self._cache = []

    def __iter__(self):
        return self

    def __next__(self):
        if self._cache:
            return self._cache.pop(0)
        else:
            return next(self._iterator)

    def peek(self, n=1):
        """Peek at the next n elements without consuming them."""
        while len(self._cache) < n:
            try:
                self._cache.append(next(self._iterator))
            except StopIteration:
                break
        if n == 1:
            return self._cache[0] if self._cache else None
        else:
            return self._cache[:n]


class _ModelParam:
    def __init__(self, **kwargs):
        self.param = {}
        self.param.update(kwargs)

    def get_param(self):
        return self.param


class _ModelConfig:
    def __init__(self, **kwargs):
        self.config = {}
        self.config.update(kwargs)

    def get_config(self):
        return self.config

    def set_config(self, **kwargs):
        self.config.update(kwargs)


def broadcast_model(cachePath, modelConfig):
    bc_computable = _BroadcastableModel(cachePath, modelConfig)
    sc = SparkSession.builder.getOrCreate().sparkContext
    return sc.broadcast(bc_computable)


class _BroadcastableModel:
    def __init__(self, model_path=None, model_config=None):
        self.model_path = model_path
        self.model = None
        self.tokenizer = None
        self.model_config = model_config

    def load_model(self):
        if self.model_path and os.path.exists(self.model_path):
            model_config = self.model_config.get_config()
            self.model = AutoModelForCausalLM.from_pretrained(
                self.model_path, local_files_only=True, **model_config
            )
            self.tokenizer = AutoTokenizer.from_pretrained(
                self.model_path, local_files_only=True
            )
        else:
            raise ValueError(f"Model path {self.model_path} does not exist.")

    def __getstate__(self):
        return {"model_path": self.model_path, "model_config": self.model_config}

    def __setstate__(self, state):
        self.model_path = state.get("model_path")
        self.model_config = state.get("model_config")
        self.model = None
        self.tokenizer = None
        if self.model_path:
            self.load_model()
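
# The pickle hooks above keep the loaded weights out of the broadcast payload:
# only model_path and model_config are serialized, and each executor reloads the
# model and tokenizer from the shared cache path when the value is deserialized.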


class HuggingFaceCausalLM(
    Transformer, HasInputCol, HasOutputCol, DefaultParamsReadable, DefaultParamsWritable
):

    modelName = Param(
        Params._dummy(),
        "modelName",
        "huggingface causal lm model name",
        typeConverter=TypeConverters.toString,
    )
    inputCol = Param(
        Params._dummy(),
        "inputCol",
        "input column",
        typeConverter=TypeConverters.toString,
    )
    outputCol = Param(
        Params._dummy(),
        "outputCol",
        "output column",
        typeConverter=TypeConverters.toString,
    )
    task = Param(
        Params._dummy(),
        "task",
        "Specifies the task; it can be either 'chat' or 'completion'.",
        typeConverter=TypeConverters.toString,
    )
    modelParam = Param(
        Params._dummy(),
        "modelParam",
        "Model parameters, passed to .generate(). For more details, see https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationConfig",
    )
    modelConfig = Param(
        Params._dummy(),
        "modelConfig",
        "Model configuration, passed to AutoModelForCausalLM.from_pretrained(). For more details, see https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoModelForCausalLM",
    )
    cachePath = Param(
        Params._dummy(),
        "cachePath",
        "Cache path for the model: a location shared between the workers, such as a lakehouse path.",
        typeConverter=TypeConverters.toString,
    )
    deviceMap = Param(
        Params._dummy(),
        "deviceMap",
        "Specifies the device map for model placement; it can also be set with modelParam. Commonly used values include 'auto', 'cuda', or 'cpu'. Check your model's documentation for supported device maps.",
        typeConverter=TypeConverters.toString,
    )
    torchDtype = Param(
        Params._dummy(),
        "torchDtype",
        "Specifies the torch dtype; it can also be set with modelParam. The most commonly used value is 'auto'. Check your model's documentation for supported dtypes.",
        typeConverter=TypeConverters.toString,
    )

    @keyword_only
    def __init__(
        self,
        modelName=None,
        inputCol=None,
        outputCol=None,
        task="chat",
        cachePath=None,
        deviceMap=None,
        torchDtype=None,
    ):
        super(HuggingFaceCausalLM, self).__init__()
        self._setDefault(
            modelName=modelName,
            inputCol=inputCol,
            outputCol=outputCol,
            modelParam=_ModelParam(),
            modelConfig=_ModelConfig(),
            task=task,
            cachePath=None,
            deviceMap=None,
            torchDtype=None,
        )
        kwargs = self._input_kwargs
        self.setParams(**kwargs)

    @keyword_only
    def setParams(self, **kwargs):
        # @keyword_only stores the passed kwargs on self._input_kwargs;
        # accepting **kwargs lets __init__ forward its arguments here.
        kwargs = self._input_kwargs
        return self._set(**kwargs)

    def setModelName(self, value):
        return self._set(modelName=value)

    def getModelName(self):
        return self.getOrDefault(self.modelName)

    def setInputCol(self, value):
        return self._set(inputCol=value)

    def getInputCol(self):
        return self.getOrDefault(self.inputCol)

    def setOutputCol(self, value):
        return self._set(outputCol=value)

    def getOutputCol(self):
        return self.getOrDefault(self.outputCol)

    def setModelParam(self, **kwargs):
        param = _ModelParam(**kwargs)
        return self._set(modelParam=param)

    def getModelParam(self):
        return self.getOrDefault(self.modelParam)

    def setModelConfig(self, **kwargs):
        config = _ModelConfig(**kwargs)
        return self._set(modelConfig=config)

    def getModelConfig(self):
        return self.getOrDefault(self.modelConfig)

    def setTask(self, value):
        supported_values = ["completion", "chat"]
        if value not in supported_values:
            raise ValueError(
                f"Task must be one of {supported_values}, but got '{value}'."
            )
        return self._set(task=value)

    def getTask(self):
        return self.getOrDefault(self.task)

    def setCachePath(self, value):
        return self._set(cachePath=value)

    def getCachePath(self):
        return self.getOrDefault(self.cachePath)

    def setDeviceMap(self, value):
        return self._set(deviceMap=value)

    def getDeviceMap(self):
        return self.getOrDefault(self.deviceMap)

    def setTorchDtype(self, value):
        return self._set(torchDtype=value)

    def getTorchDtype(self):
        return self.getOrDefault(self.torchDtype)

    def getBCObject(self):
        return self.bcObject

    def _predict_single_completion(self, prompt, model, tokenizer):
        param = self.getModelParam().get_param()
        inputs = tokenizer(prompt, return_tensors="pt").input_ids
        outputs = model.generate(inputs, **param)
        decoded_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
        return decoded_output

    def _predict_single_chat(self, prompt, model, tokenizer):
        param = self.getModelParam().get_param()
        if isinstance(prompt, list):
            chat = prompt
        else:
            chat = [{"role": "user", "content": prompt}]
        formatted_chat = tokenizer.apply_chat_template(
            chat, tokenize=False, add_generation_prompt=True
        )
        tokenized_chat = tokenizer(
            formatted_chat, return_tensors="pt", add_special_tokens=False
        )
        inputs = {
            key: tensor.to(model.device) for key, tensor in tokenized_chat.items()
        }
        merged_inputs = {**inputs, **param}
        outputs = model.generate(**merged_inputs)
        # Decode only the newly generated tokens, slicing off the prompt tokens.
        decoded_output = tokenizer.decode(
            outputs[0][inputs["input_ids"].size(1) :], skip_special_tokens=True
        )
        return decoded_output

    def _process_partition(self, iterator, bc_object):
        """Process each partition of the data."""
        peekable_iterator = _PeekableIterator(iterator)
        # peek() returns None (rather than raising StopIteration) when the
        # partition is empty, so skip model loading in that case.
        if peekable_iterator.peek() is None:
            return

        if bc_object:
            lc_object = bc_object.value
            model = lc_object.model
            tokenizer = lc_object.tokenizer
        else:
            model_name = self.getModelName()
            model_config = self.getModelConfig().get_config()
            model = AutoModelForCausalLM.from_pretrained(model_name, **model_config)
            tokenizer = AutoTokenizer.from_pretrained(model_name)

        task = self.getTask() if self.getTask() else "chat"

        for row in peekable_iterator:
            prompt = row[self.getInputCol()]
            if task == "chat":
                result = self._predict_single_chat(prompt, model, tokenizer)
            elif task == "completion":
                result = self._predict_single_completion(prompt, model, tokenizer)
            else:
                raise ValueError(
                    f"Unsupported task '{task}'. Supported tasks are 'chat' and 'completion'."
                )
            row_dict = row.asDict()
            row_dict[self.getOutputCol()] = result
            yield Row(**row_dict)

    def _transform(self, dataset):
        if self.getCachePath():
            bc_object = broadcast_model(self.getCachePath(), self.getModelConfig())
        else:
            bc_object = None
        input_schema = dataset.schema
        output_schema = StructType(
            input_schema.fields + [StructField(self.getOutputCol(), StringType(), True)]
        )
        result_rdd = dataset.rdd.mapPartitions(
            lambda partition: self._process_partition(partition, bc_object)
        )
        result_df = result_rdd.toDF(output_schema)
        return result_df
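
For reference, a minimal usage sketch of the transformer above, assuming a running Spark session and the microsoft/Phi-3-mini-4k-instruct checkpoint (the column names and parameter values are illustrative, not prescriptive):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("What is SynapseML?",)], ["query"])

phi3 = (
    HuggingFaceCausalLM()
    .setModelName("microsoft/Phi-3-mini-4k-instruct")
    .setInputCol("query")
    .setOutputCol("response")
    .setTask("chat")
)
phi3.setModelConfig(device_map="auto")  # forwarded to from_pretrained()
phi3.setModelParam(max_new_tokens=100)  # forwarded to .generate()

phi3.transform(df).select("query", "response").show(truncate=False)

Without setCachePath, each partition downloads and loads the model itself; pointing cachePath at a pre-populated shared location instead broadcasts the lightweight wrapper and loads weights locally on each worker.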
@@ -14,7 +14,7 @@ class DatabricksGPUTests extends DatabricksTestHelper {
 
   val clusterId: String = createClusterInPool(GPUClusterName, AdbGpuRuntime, 2, GpuPoolId)
 
-  databricksTestHelper(clusterId, GPULibraries, GPUNotebooks)
+  databricksTestHelper(clusterId, GPULibraries, GPUNotebooks, 1)
 
   protected override def afterAll(): Unit = {
     afterAllHelper(clusterId, GPUClusterName)
@@ -84,9 +84,11 @@ object DatabricksUtilities {
     Map("maven" -> Map("coordinates" -> PackageMavenCoordinate, "repo" -> PackageRepository)),
     Map("pypi" -> Map("package" -> "pytorch-lightning==1.5.0")),
     Map("pypi" -> Map("package" -> "torchvision==0.14.1")),
-    Map("pypi" -> Map("package" -> "transformers==4.32.1")),
+    Map("pypi" -> Map("package" -> "transformers==4.49.0")),
+    Map("pypi" -> Map("package" -> "jinja2==3.1.0")),
     Map("pypi" -> Map("package" -> "petastorm==0.12.0")),
-    Map("pypi" -> Map("package" -> "protobuf==3.20.3"))
+    Map("pypi" -> Map("package" -> "protobuf==3.20.3")),
+    Map("pypi" -> Map("package" -> "accelerate==0.26.0"))
   ).toJson.compactPrint

val RapidsInitScripts: String = List(
@@ -105,12 +107,16 @@ object DatabricksUtilities {
   val CPUNotebooks: Seq[File] = ParallelizableNotebooks
     .filterNot(_.getAbsolutePath.contains("Fine-tune"))
     .filterNot(_.getAbsolutePath.contains("GPU"))
+    .filterNot(_.getAbsolutePath.contains("Phi Model"))
+    .filterNot(_.getAbsolutePath.contains("Language Model"))
     .filterNot(_.getAbsolutePath.contains("Multivariate Anomaly Detection")) // Deprecated
     .filterNot(_.getAbsolutePath.contains("Audiobooks")) // TODO Remove this by fixing auth
     .filterNot(_.getAbsolutePath.contains("Art")) // TODO Remove this by fixing performance
     .filterNot(_.getAbsolutePath.contains("Explanation Dashboard")) // TODO Remove this exclusion
 
-  val GPUNotebooks: Seq[File] = ParallelizableNotebooks.filter(_.getAbsolutePath.contains("Fine-tune"))
+  val GPUNotebooks: Seq[File] = ParallelizableNotebooks.filter { file =>
+    file.getAbsolutePath.contains("Fine-tune") || file.getAbsolutePath.contains("Phi Model")
+  }
 
   val RapidsNotebooks: Seq[File] = ParallelizableNotebooks.filter(_.getAbsolutePath.contains("GPU"))

@@ -427,7 +433,8 @@ abstract class DatabricksTestHelper extends TestBase {
 
   def databricksTestHelper(clusterId: String,
                            libraries: String,
-                           notebooks: Seq[File]): Unit = {
+                           notebooks: Seq[File],
+                           maxConcurrency: Int = 8): Unit = {
 
     println("Checking if cluster is active")
     tryWithRetries(Seq.fill(60 * 20)(1000).toArray) { () =>
@@ -443,7 +450,6 @@ abstract class DatabricksTestHelper extends TestBase {
 
     assert(notebooks.nonEmpty)
 
-    val maxConcurrency = 8
     val executorService = Executors.newFixedThreadPool(maxConcurrency)
     implicit val executionContext: ExecutionContext = ExecutionContext.fromExecutor(executorService)
@@ -48,6 +48,7 @@ class SynapseTests extends TestBase {
     .filter(_.getAbsolutePath.endsWith(".py"))
     .filterNot(_.getAbsolutePath.contains("Finetune")) // Excluded by design task 1829306
     .filterNot(_.getAbsolutePath.contains("GPU"))
+    .filterNot(_.getAbsolutePath.contains("PhiModel"))
     .filterNot(_.getAbsolutePath.contains("VWnativeFormat"))
     .filterNot(_.getAbsolutePath.contains("VowpalWabbitMulticlassclassification")) // Wait for Synapse fix
     .filterNot(_.getAbsolutePath.contains("Langchain")) // Wait for Synapse fix