Cannot run the code in the specified environment #2

Open
rogerxl-brennanhs opened this issue Dec 26, 2024 · 0 comments

Comments


rogerxl-brennanhs commented Dec 26, 2024

Hello, and thank you for open-sourcing this work 🙏.
I set up the environment following your requirements (torch==1.7.1+cu110, stanza==1.5.1), but running data_prepro_getGraph.py raises UnboundLocalError: local variable 'doc' referenced before assignment, and the underlying cause is Exception: module 'torch' has no attribute 'take_along_dim'. How should I resolve this?

torch.take_along_dim seems to have been introduced only in PyTorch 1.10.0. Do I need to switch to a newer PyTorch version, or should I use a different stanza version instead?
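If changing versions is not an option, would a small compatibility shim along the following lines be a reasonable stopgap? This is only a sketch of an idea, assuming take_along_dim is the only missing API and that stanza always calls it with an indices tensor whose shape already matches the input (the case torch.gather covers):

import torch

# Rough shim for PyTorch < 1.10, where torch.take_along_dim does not exist.
# Assumes indices has the same number of dimensions as input and matching sizes
# on every axis except dim, so torch.gather behaves the same way; it would need
# to run before the stanza pipeline is called.
if not hasattr(torch, 'take_along_dim'):
    def _take_along_dim(input, indices, dim):
        return torch.gather(input, dim, indices)
    torch.take_along_dim = _take_along_dim

For reference, the function that fails is below.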

import stanza

def get_dependency(tokens):
    # Build a Stanza pipeline for dependency parsing on pre-tokenized input.
    nlp = stanza.Pipeline(lang='en',
                          processors='tokenize,mwt,pos,lemma,depparse,constituency',
                          tokenize_pretokenized=True,
                          dir='/path/to/stanza_resources',
                          download_method=None)

    result = []
    result2 = []
    POS = []
    Pre_head = []
    for idx, token in enumerate(tokens):
        sente = ' '.join(token)
        try:
            doc = nlp(sente)
        except Exception as e:
            print(f"Error processing sentence at index {idx}: {token}")
            print(f"Exception: {e}")
        # Take the dependency parse of the first sentence
        sent = doc.sentences
        dependencies = sent[0].dependencies
        dd = []
        pos = []
        for dependency in dependencies:
            this_word = dependency[2]

            token_id = this_word.id
            token_head_id = this_word.head
            token_dependency_label = this_word.deprel

            # Store each dependency as ['label', head_id, token_id]
            if token_head_id == 0:
                dd.append(['root', token_head_id, token_id])
            else:
                dd.append([token_dependency_label, token_head_id, token_id])
            pos.append(this_word.pos)
        result.append(dd)
        dd2 = [e[0] for e in dd]
        POS.append(pos)
        result2.append(dd2)
        prehead = [e[1] for e in dd]
        Pre_head.append(prehead)
    return result2, result, POS, Pre_head

2024-12-26 12:40:24 WARNING: Can not find mwt: default from official model list. Ignoring it.
2024-12-26 12:40:24 INFO: Loading these models for language: en (English):
======================================
| Processor    | Package             |
--------------------------------------
| tokenize     | combined            |
| pos          | combined_charlm     |
| lemma        | combined_nocharlm   |
| constituency | ptb3-revised_charlm |
| depparse     | combined_charlm     |
======================================

2024-12-26 12:40:24 INFO: Using device: cuda
2024-12-26 12:40:24 INFO: Loading: tokenize
2024-12-26 12:40:24 INFO: Loading: pos
2024-12-26 12:40:28 INFO: Loading: lemma
2024-12-26 12:40:28 INFO: Loading: constituency
2024-12-26 12:40:29 INFO: Loading: depparse
2024-12-26 12:40:29 INFO: Done loading processors!
Error processing sentence at index 0: ['first', 'one', 'that', 'they', 'shipped', 'was', 'obviously', 'defective', ',', 'super', 'slow', 'and', 'speakers', 'were', 'garbled', '.']
Exception: module 'torch' has no attribute 'take_along_dim'
Traceback (most recent call last):
  File "/path/to/pycharm/EPMEI/data_prepro_getGraph.py", line 123, in <module>
    data_tackled = tackle_dataset(data_list)
  File "/path/to/pycharm/EPMEI/data_prepro_getGraph.py", line 65, in tackle_dataset
    predicted_dependencies, dependencies,POS,prehead = get_dependency(sentences)
  File "/path/to/pycharm/EPMEI/data_prepro_getGraph.py", line 35, in get_dependency
    sent = doc.sentences
UnboundLocalError: local variable 'doc' referenced before assignment
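
For what it's worth, the UnboundLocalError itself looks like a secondary effect: when nlp(sente) raises, the except block only prints the error and execution falls through to doc.sentences with doc never assigned. A minimal guard, just a sketch assuming sentences that fail to parse should be skipped, would make the original take_along_dim error easier to see:

        try:
            doc = nlp(sente)
        except Exception as e:
            print(f"Error processing sentence at index {idx}: {token}")
            print(f"Exception: {e}")
            continue  # skip this sentence instead of falling through to doc.sentences

(or re-raise inside the except block to stop immediately on the original error).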