[WIP] Phi3poc #2301
Merged
Changes from all commits (69 commits)
f0c2b00 poc (JessicaXYWang)
603777a poc (JessicaXYWang)
47ae241 Merge branch 'master' into phi3poc (JessicaXYWang)
23f8ca0 rename module (JessicaXYWang)
bb5b2b6 Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML … (JessicaXYWang)
f235535 update dependency (JessicaXYWang)
f2ab308 Merge branch 'master' into phi3poc (JessicaXYWang)
3ee9168 add set device type (JessicaXYWang)
b30f168 add Downloader (JessicaXYWang)
d760733 remove import (JessicaXYWang)
6efa59c Merge branch 'master' into phi3poc (JessicaXYWang)
c7397f3 update lm (JessicaXYWang)
e1105fd Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML … (JessicaXYWang)
e59a981 Merge branch 'master' into phi3poc (JessicaXYWang)
ff8ad7f pyarrow version conflict (JessicaXYWang)
56e623d Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML … (JessicaXYWang)
efa6aa0 update transformers version (JessicaXYWang)
2f5338c add dependency (JessicaXYWang)
ff89511 update transformers version (JessicaXYWang)
b3dc5da add phi3 test (JessicaXYWang)
c0cd463 test missing transformers library (JessicaXYWang)
e3e331c update databricks test (JessicaXYWang)
382a20e update databricks test (JessicaXYWang)
0a0f80c update db library (JessicaXYWang)
eac0293 update doc (JessicaXYWang)
7a3e315 format (JessicaXYWang)
465161a add broadcast model (JessicaXYWang)
4c059dc Merge branch 'master' into phi3poc (JessicaXYWang)
4b91579 temporarily remove horovod for testing (JessicaXYWang)
caf6de7 Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML … (JessicaXYWang)
346615f test with previous transformers version (JessicaXYWang)
72aa18e test (JessicaXYWang)
87edc13 test env (JessicaXYWang)
5fcb372 test (JessicaXYWang)
068bd99 test (JessicaXYWang)
8282df4 test (JessicaXYWang)
0d7aafd test (JessicaXYWang)
1a5a9b6 fix broadcasting (JessicaXYWang)
95079dd Merge branch 'master' into phi3poc (mhamilton723)
869bad8 update dependency (JessicaXYWang)
a2fb10b Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML … (JessicaXYWang)
2786c90 test without hadoop client api (JessicaXYWang)
10f99f4 Merge branch 'master' into phi3poc (JessicaXYWang)
21feb43 update ubuntu version (JessicaXYWang)
53cefd9 Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML … (JessicaXYWang)
4f3c20d update E2E phi3 test (JessicaXYWang)
36eb25f exclude phi3 synapse test (JessicaXYWang)
d20546b bug fix (JessicaXYWang)
4038df5 Merge branch 'master' into phi3poc (JessicaXYWang)
d05ff76 fix style (JessicaXYWang)
8bf8c11 Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML … (JessicaXYWang)
cf7fb78 format (JessicaXYWang)
a0e1cca add gpu test library (JessicaXYWang)
9738d83 update causallm (JessicaXYWang)
9babd03 Merge branch 'master' into phi3poc (JessicaXYWang)
86fb464 update model (JessicaXYWang)
253aaad Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML … (JessicaXYWang)
b6fa41f bug fix (JessicaXYWang)
6392848 add phi4 to e2e, update transformers version (JessicaXYWang)
f1bb367 update env (JessicaXYWang)
70265af add dependency (JessicaXYWang)
c0fe909 update phi e2e (JessicaXYWang)
9f96240 Merge branch 'master' into phi3poc (JessicaXYWang)
9e37dee increase timeout (JessicaXYWang)
19ce5ba Merge branch 'phi3poc' of https://github.com/JessicaXYWang/SynapseML … (JessicaXYWang)
e139136 test run (JessicaXYWang)
29df6f8 reduce concurrency on gpu cluster (JessicaXYWang)
51b42e5 format (JessicaXYWang)
bdd7b15 Update core/src/test/scala/com/microsoft/azure/synapse/ml/nbtest/Data… (mhamilton723)
core/src/main/python/synapse/ml/llm/HuggingFaceCausallmTransform.py
331 changes: 331 additions & 0 deletions
@@ -0,0 +1,331 @@
import os

from pyspark import keyword_only
from pyspark.ml import Transformer
from pyspark.ml.param.shared import (
    HasInputCol,
    HasOutputCol,
    Param,
    Params,
    TypeConverters,
)
from pyspark.ml.util import DefaultParamsReadable, DefaultParamsWritable
from pyspark.sql import Row, SparkSession
from pyspark.sql.types import StringType, StructField, StructType
from transformers import AutoModelForCausalLM, AutoTokenizer


class _PeekableIterator:
    def __init__(self, iterable):
        self._iterator = iter(iterable)
        self._cache = []

    def __iter__(self):
        return self

    def __next__(self):
        if self._cache:
            return self._cache.pop(0)
        else:
            return next(self._iterator)

    def peek(self, n=1):
        """Peek at the next n elements without consuming them.

        Returns None (for n == 1) or a short list (for n > 1) instead of
        raising when the underlying iterator is exhausted.
        """
        while len(self._cache) < n:
            try:
                self._cache.append(next(self._iterator))
            except StopIteration:
                break
        if n == 1:
            return self._cache[0] if self._cache else None
        else:
            return self._cache[:n]


class _ModelParam:
    def __init__(self, **kwargs):
        self.param = {}
        self.param.update(kwargs)

    def get_param(self):
        return self.param


class _ModelConfig:
    def __init__(self, **kwargs):
        self.config = {}
        self.config.update(kwargs)

    def get_config(self):
        return self.config

    def set_config(self, **kwargs):
        self.config.update(kwargs)


def broadcast_model(cachePath, modelConfig):
    bc_computable = _BroadcastableModel(cachePath, modelConfig)
    sc = SparkSession.builder.getOrCreate().sparkContext
    return sc.broadcast(bc_computable)


class _BroadcastableModel:
    def __init__(self, model_path=None, model_config=None):
        self.model_path = model_path
        self.model = None
        self.tokenizer = None
        self.model_config = model_config

    def load_model(self):
        if self.model_path and os.path.exists(self.model_path):
            model_config = self.model_config.get_config()
            self.model = AutoModelForCausalLM.from_pretrained(
                self.model_path, local_files_only=True, **model_config
            )
            self.tokenizer = AutoTokenizer.from_pretrained(
                self.model_path, local_files_only=True
            )
        else:
            raise ValueError(f"Model path {self.model_path} does not exist.")

    def __getstate__(self):
        # Only the path and config are pickled; the weights themselves never
        # travel through the broadcast.
        return {"model_path": self.model_path, "model_config": self.model_config}

    def __setstate__(self, state):
        self.model_path = state.get("model_path")
        self.model_config = state.get("model_config")
        self.model = None
        self.tokenizer = None
        if self.model_path:
            # Each executor reloads the model from the shared path when it
            # deserializes the broadcast.
            self.load_model()


class HuggingFaceCausalLM(
    Transformer, HasInputCol, HasOutputCol, DefaultParamsReadable, DefaultParamsWritable
):

    modelName = Param(
        Params._dummy(),
        "modelName",
        "Hugging Face causal LM model name",
        typeConverter=TypeConverters.toString,
    )
    inputCol = Param(
        Params._dummy(),
        "inputCol",
        "input column",
        typeConverter=TypeConverters.toString,
    )
    outputCol = Param(
        Params._dummy(),
        "outputCol",
        "output column",
        typeConverter=TypeConverters.toString,
    )
    task = Param(
        Params._dummy(),
        "task",
        "Specifies the task; can be 'chat' or 'completion'.",
        typeConverter=TypeConverters.toString,
    )
    modelParam = Param(
        Params._dummy(),
        "modelParam",
        "Model parameters, passed to .generate(). For details, see https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationConfig",
    )
    modelConfig = Param(
        Params._dummy(),
        "modelConfig",
        "Model configuration, passed to AutoModelForCausalLM.from_pretrained(). For details, see https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoModelForCausalLM",
    )
    cachePath = Param(
        Params._dummy(),
        "cachePath",
        "Cache path for the model: a location shared between the workers, e.g. a lakehouse path.",
        typeConverter=TypeConverters.toString,
    )
    deviceMap = Param(
        Params._dummy(),
        "deviceMap",
        "Specifies the device map model parameter; it can also be set through modelParam. Common values are 'auto', 'cuda', and 'cpu'; check the model's documentation for supported device maps.",
        typeConverter=TypeConverters.toString,
    )
    torchDtype = Param(
        Params._dummy(),
        "torchDtype",
        "Specifies the torch dtype model parameter; it can also be set through modelParam. The most common value is 'auto'; check the model's documentation for supported dtypes.",
        typeConverter=TypeConverters.toString,
    )

    @keyword_only
    def __init__(
        self,
        modelName=None,
        inputCol=None,
        outputCol=None,
        task="chat",
        cachePath=None,
        deviceMap=None,
        torchDtype=None,
    ):
        super(HuggingFaceCausalLM, self).__init__()
        self._setDefault(
            modelName=modelName,
            inputCol=inputCol,
            outputCol=outputCol,
            modelParam=_ModelParam(),
            modelConfig=_ModelConfig(),
            task=task,
            cachePath=None,
            deviceMap=None,
            torchDtype=None,
        )
        kwargs = self._input_kwargs
        self.setParams(**kwargs)

    @keyword_only
    def setParams(self):
        kwargs = self._input_kwargs
        return self._set(**kwargs)

    def setModelName(self, value):
        return self._set(modelName=value)

    def getModelName(self):
        return self.getOrDefault(self.modelName)

    def setInputCol(self, value):
        return self._set(inputCol=value)

    def getInputCol(self):
        return self.getOrDefault(self.inputCol)

    def setOutputCol(self, value):
        return self._set(outputCol=value)

    def getOutputCol(self):
        return self.getOrDefault(self.outputCol)

    def setModelParam(self, **kwargs):
        param = _ModelParam(**kwargs)
        return self._set(modelParam=param)

    def getModelParam(self):
        return self.getOrDefault(self.modelParam)

    def setModelConfig(self, **kwargs):
        config = _ModelConfig(**kwargs)
        return self._set(modelConfig=config)

    def getModelConfig(self):
        return self.getOrDefault(self.modelConfig)

    def setTask(self, value):
        supported_values = ["completion", "chat"]
        if value not in supported_values:
            raise ValueError(
                f"Task must be one of {supported_values}, but got '{value}'."
            )
        return self._set(task=value)

    def getTask(self):
        return self.getOrDefault(self.task)

    def setCachePath(self, value):
        return self._set(cachePath=value)

    def getCachePath(self):
        return self.getOrDefault(self.cachePath)

    def setDeviceMap(self, value):
        return self._set(deviceMap=value)

    def getDeviceMap(self):
        return self.getOrDefault(self.deviceMap)

    def setTorchDtype(self, value):
        return self._set(torchDtype=value)

    def getTorchDtype(self):
        return self.getOrDefault(self.torchDtype)

    def getBCObject(self):
        # Populated by _transform when a cachePath is set; None otherwise.
        return getattr(self, "bcObject", None)

    def _predict_single_completion(self, prompt, model, tokenizer):
        param = self.getModelParam().get_param()
        # Move the input ids to the model's device so generation also works
        # when the model was loaded onto a GPU.
        inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
        outputs = model.generate(inputs, **param)
        decoded_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
        return decoded_output

    def _predict_single_chat(self, prompt, model, tokenizer):
        param = self.getModelParam().get_param()
        # Accept either a full chat history (list of role/content dicts) or a
        # bare string, which is wrapped as a single user turn.
        if isinstance(prompt, list):
            chat = prompt
        else:
            chat = [{"role": "user", "content": prompt}]
        formatted_chat = tokenizer.apply_chat_template(
            chat, tokenize=False, add_generation_prompt=True
        )
        tokenized_chat = tokenizer(
            formatted_chat, return_tensors="pt", add_special_tokens=False
        )
        inputs = {
            key: tensor.to(model.device) for key, tensor in tokenized_chat.items()
        }
        merged_inputs = {**inputs, **param}
        outputs = model.generate(**merged_inputs)
        # Decode only the newly generated tokens, not the echoed prompt.
        decoded_output = tokenizer.decode(
            outputs[0][inputs["input_ids"].size(1) :], skip_special_tokens=True
        )
        return decoded_output

    def _process_partition(self, iterator, bc_object):
        """Process each partition of the data."""
        peekable_iterator = _PeekableIterator(iterator)
        # peek() returns None (rather than raising StopIteration) on an empty
        # partition, so bail out early instead of loading the model for nothing.
        if peekable_iterator.peek() is None:
            return

        if bc_object:
            lc_object = bc_object.value
            model = lc_object.model
            tokenizer = lc_object.tokenizer
        else:
            model_name = self.getModelName()
            model_config = self.getModelConfig().get_config()
            model = AutoModelForCausalLM.from_pretrained(model_name, **model_config)
            tokenizer = AutoTokenizer.from_pretrained(model_name)

        task = self.getTask() if self.getTask() else "chat"

        for row in peekable_iterator:
            prompt = row[self.getInputCol()]
            if task == "chat":
                result = self._predict_single_chat(prompt, model, tokenizer)
            elif task == "completion":
                result = self._predict_single_completion(prompt, model, tokenizer)
            else:
                raise ValueError(
                    f"Unsupported task '{task}'. Supported tasks are 'chat' and 'completion'."
                )
            row_dict = row.asDict()
            row_dict[self.getOutputCol()] = result
            yield Row(**row_dict)

    def _transform(self, dataset):
        if self.getCachePath():
            # Broadcast only the path and config; executors reload the weights
            # from the shared cache path on deserialization.
            bc_object = broadcast_model(self.getCachePath(), self.getModelConfig())
        else:
            bc_object = None
        self.bcObject = bc_object
        input_schema = dataset.schema
        output_schema = StructType(
            input_schema.fields + [StructField(self.getOutputCol(), StringType(), True)]
        )
        result_rdd = dataset.rdd.mapPartitions(
            lambda partition: self._process_partition(partition, bc_object)
        )
        result_df = result_rdd.toDF(output_schema)
        return result_df
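
For reference, a minimal usage sketch of this transformer follows. The import path, model id, column names, and generation parameters are illustrative assumptions, not part of this PR:

# Hypothetical usage sketch; assumes the module is importable as synapse.ml.llm.
from pyspark.sql import SparkSession
from synapse.ml.llm.HuggingFaceCausallmTransform import HuggingFaceCausalLM

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("What is SynapseML?",)], ["prompt"])

llm = HuggingFaceCausalLM(
    modelName="microsoft/Phi-3-mini-4k-instruct",  # assumed model id
    inputCol="prompt",
    outputCol="response",
    task="chat",
)
llm.setModelParam(max_new_tokens=128)  # forwarded to model.generate()
llm.setModelConfig(device_map="auto")  # forwarded to from_pretrained()
llm.transform(df).select("prompt", "response").show(truncate=False)

Without a cachePath, each partition loads the model by name inside _process_partition, so every worker fetches it from the Hugging Face Hub (or its local HF cache) independently.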
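When cachePath is set, _transform instead broadcasts a _BroadcastableModel, and each executor reloads the weights from the shared path (with local_files_only=True) when it deserializes the broadcast. A sketch of that mode, reusing df from the sketch above; the lakehouse path is hypothetical and must already hold a full model snapshot (config, tokenizer, and weights):

# Shared-cache sketch; the path below is a hypothetical pre-downloaded snapshot.
cached_llm = HuggingFaceCausalLM(
    inputCol="prompt",
    outputCol="response",
    task="chat",
    cachePath="/lakehouse/default/Files/models/Phi-3-mini-4k-instruct",
)
cached_llm.setModelConfig(device_map="auto")  # still forwarded to from_pretrained()
cached_llm.transform(df).show(truncate=False)

Because __getstate__ pickles only the path and config, the broadcast stays small; the weights never travel through the driver.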