function `aggregate` NotImplementedError #16

Comments
To use `dummy_spark`, write `from dummy_spark import SparkContext` instead of `from pyspark import SparkContext`. If you import from […].

---
@chenpeizhi Sorry, that was a typo. I got the error message when using `dummy_spark`.

---

@hongzhouye I see. Are you using the original `dummy_spark`?

---
@chenpeizhi I used the one from PyPI. After switching to Jinmo's fork, no error was seen anymore. That also makes the […]. However, when I tried to run the "advanced ccd" example from the same tutorial, I got a segfault at this step:

```python
# When the gristmill package is also installed, the evaluation of the working
# equations can also be optimized with it.
eval_seq = optimize(
    [e, r2], substs={no: 1000, nv: 10000}
)
```

I had a look at […], which has the line

```python
from pyspark import RDD, SparkContext
```

Now comes my question: is this `pyspark` import compatible with `dummy_spark`? Thanks a lot!

---
There is only one version of `drudge`. The import statements in `drudge` […].

Could you share where exactly in the tests the above errors arise? That would certainly help identify the source; the tests and the "advanced ccd" tutorial run fine on my machine.

---
Yep, switching to Jinmo's `dummy_spark` fixed that.

The error I was referring to was seen when running the tests in gristmill/tests. I traced the reason to […]. Note that I saw similar error messages when running some tests in […].

The error seems to occur at the stage of initializing the environment, i.e.

```python
dr = BogoliubovDrudge(spark_ctx)
```

I suspect the problem is with my […]. As a sanity check I ran

```python
from dummy_spark import SparkContext  # Jinmo's dummy_spark
from drudge import ReducedBCSDrudge

ctx = SparkContext()
dr = ReducedBCSDrudge(ctx)
```

It gives no error, which seems to suggest that at least those different Drudges work well with Jinmo's `dummy_spark`.

---
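The sanity check above can be generalized: try to construct each object of interest and record which ones raise. The sketch below is illustrative only (the helper name `try_init` and the demo callables are made up, not part of drudge); with drudge installed, the factories would be lambdas such as `lambda: ReducedBCSDrudge(ctx)`.

```python
def try_init(factories):
    """Call each zero-argument factory; record 'ok' or the failure message.

    `factories` maps a label to a callable that builds the object under
    test, e.g. {"ReducedBCSDrudge": lambda: ReducedBCSDrudge(ctx)}.
    """
    results = {}
    for name, factory in factories.items():
        try:
            factory()
            results[name] = "ok"
        except Exception as exc:
            results[name] = f"failed: {exc}"
    return results

# The helper works with any callables, so it can be demoed without drudge:
print(try_init({"good": lambda: 1, "bad": lambda: 1 / 0}))
```

Running all the Drudge subclasses through one loop like this makes it easy to see whether a failure is specific to one module or common to all of them.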
We use the original pyspark. On my computer, I use Python 3.7.4 and pyspark 2.4.3. We haven't tested pyspark 2.4.5 yet. It might be worth testing the older pyspark 2.4.3 on your computer to see whether anything changes.

---
@chenpeizhi Thanks for your response!

I uninstalled 2.4.5 and reinstalled 2.4.3. But when I tried to run some simple examples, like estimating π by Monte Carlo sampling,

```python
import random

NUM_SAMPLES = 100000000

def inside(p):
    x, y = random.random(), random.random()
    return x*x + y*y < 1

count = sc.parallelize(range(0, NUM_SAMPLES)).filter(inside).count()
pi = 4 * count / NUM_SAMPLES
print("Pi is roughly", pi)
```

it failed at the […] step.

Since you mentioned that Jinmo's `dummy_spark` works, is it possible to specifically ask the gristmill tests to use `dummy_spark` instead of `pyspark`?

---
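For reference, the same Monte-Carlo estimate runs without Spark at all; a plain-Python version (with far fewer samples, and a fixed seed added here for reproducibility) helps separate Spark problems from problems in the estimate itself. This is a generic rewrite, not code from the tutorial.

```python
import random

random.seed(0)           # fixed seed, for reproducibility
NUM_SAMPLES = 100_000    # far fewer samples than the Spark run, for speed

def inside(_):
    # Draw a point in the unit square; test whether it lies in the
    # quarter circle of radius 1.
    x, y = random.random(), random.random()
    return x * x + y * y < 1

count = sum(1 for i in range(NUM_SAMPLES) if inside(i))
pi_est = 4 * count / NUM_SAMPLES
print("Pi is roughly", pi_est)
```

If this serial version works but the `sc.parallelize(...)` version fails, the problem is in the Spark/Java setup rather than in the computation.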
You may do

```
export DUMMY_SPARK=1
```

before running the gristmill tests.

---
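A common way such a switch is implemented (a sketch only, not drudge's actual code; the helper name `spark_backend` is made up) is to read the environment variable and pick the backend module accordingly:

```python
import os

def spark_backend():
    """Pick the SparkContext provider based on the DUMMY_SPARK variable."""
    if os.environ.get("DUMMY_SPARK", "0") == "1":
        return "dummy_spark"
    return "pyspark"

os.environ["DUMMY_SPARK"] = "1"
print(spark_backend())  # dummy_spark
```

The point is that the variable must be set in the environment of the test process itself, which is why `export` (rather than a plain shell assignment) is needed before invoking the tests.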
There appears to be some incompatibility between Java and Spark. A simple way to avoid it would be to either use `dummy_spark` […].

---
I tried this and reran the "advanced ccd" tutorial script; it still failed with a segfault. I then tried rerunning the tests from […]. The good news is that when I reran the tests in […], only some failed, with the following failure messages: […]

You can see that they are all related to a call of the […] function.

I also saw discussions about the conflict between Java and Spark. Some suggest downgrading Java to Java 8 (I am using Java 12 now), which I unfortunately cannot do since I have other programs depending on Java. BTW, what is the version of Java on your machine? It is unlikely that the error is due to this, but I just want to make sure :D

---
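Since Java versions keep coming up in this thread, a small stdlib-only helper can remove the guesswork. This is a generic sketch (the function names are made up); it handles both the old `1.8.0_252` and the new `12.0.1` version schemes, and returns `None` when no `java` binary is on the PATH.

```python
import re
import shutil
import subprocess

def parse_java_major(version_text):
    """Extract the major version from `java -version` output.

    Handles the old scheme ("1.8.0_252" -> 8) and the new one
    ("12.0.1" -> 12). Returns None if no version string is found.
    """
    m = re.search(r'version "(\d+)(?:\.(\d+))?', version_text)
    if m is None:
        return None
    major = int(m.group(1))
    if major == 1 and m.group(2):
        major = int(m.group(2))  # "1.8" style means Java 8
    return major

def java_major_version():
    """Return the installed Java major version, or None if java is absent."""
    java = shutil.which("java")
    if java is None:
        return None
    # `java -version` historically prints to stderr.
    proc = subprocess.run([java, "-version"], capture_output=True, text=True)
    return parse_java_major(proc.stderr or proc.stdout)

print(parse_java_major('openjdk version "1.8.0_252"'))        # 8
print(parse_java_major('openjdk version "12.0.1" 2019-04-16'))  # 12
```

With this, "am I on Java 8?" becomes `java_major_version() == 8` instead of eyeballing console output.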
The […]. The […].

---
I second @chenpeizhi's comments. I am using OpenJDK 1.8.0_252, which is Java 8. Also, as pointed out above, if you want to use […]. As for […], each submodule in Drudge has its own rules for […].

---
Sorry, I did not make this clear. After using […]:

[…]

You can see that it successfully derived the working equations but failed to optimize them. Just to be super clear, the output above was obtained by saving the following as `conf_ph.py`:

```python
"""Configures a simple drudge for particle-hole model."""
from dummy_spark import SparkContext
from drudge import PartHoleDrudge

ctx = SparkContext()
dr = PartHoleDrudge(ctx)
dr.full_simplify = False

DRUDGE = dr
```

and then running

```
python -m drudge conf_ph.py ccd_adv.drs
```

---
Yes, running the tests after […].

And I realized that the problem is with the […]. The derivation lines

```python
# Derive the working equations by projection. Here we make them into tensor
# definitions with explicit left-hand sides, so that they can be used for
# optimization.
e <<= simplify(eval_fermi_vev(h_bar))
proj = c_dag[i] * c_dag[j] * c_[b] * c_[a]
r2[a, b, i, j] <<= simplify(eval_fermi_vev(proj * h_bar))
```

are actually fine after using Jinmo's `dummy_spark` with the configuration

```python
from dummy_spark import SparkContext
from drudge import PartHoleDrudge

ctx = SparkContext()
dr = PartHoleDrudge(ctx)
dr.full_simplify = False

DRUDGE = dr
```

It is the `optimize` step that fails.

---
The segfault may be caused by one of the C++ libraries (cpypp, fbitset, libparenth) used by gristmill. Those libraries were compiled when the Python setup script was run. @hongzhouye Which C++ compiler did you use?

---
@chenpeizhi The compile command was

```
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g \
    -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/hzye/local/opt/miniconda/envs/frank/include \
    -arch x86_64 -I/Users/hzye/local/opt/miniconda/envs/frank/include -arch x86_64 \
    -I/Users/hzye/local/opt/gristmill/deps/cpypp/include \
    -I/Users/hzye/local/opt/gristmill/deps/fbitset/include \
    -I/Users/hzye/local/opt/gristmill/deps/libparenth/include \
    -I/Users/hzye/local/opt/miniconda/envs/frank/include/python3.6m -c gristmill/_parenth.cpp \
    -o build/temp.macosx-10.7-x86_64-3.6/gristmill/_parenth.o -std=gnu++1z -stdlib=libc++
```

where […].

However, directly running the build gave an error: […], which said […]. I did

```
export LIBRARY_PATH=${LIBRARY_PATH}:/path/to/libstdc++.a
```

and then the build ran fine. After that […]. Do you notice any glitches?

---
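After a rebuild like the one above, a quick smoke test is simply importing the compiled extension before running the full test suite, since link errors surface at import time. A generic helper for this (the function name is made up; `gristmill._parenth` is the extension module named in the build command above):

```python
import importlib

def can_import(name):
    """Return True if the module imports cleanly, False on ImportError."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

print(can_import("math"))                # True: stdlib modules always import
print(can_import("gristmill._parenth"))  # True only if gristmill built correctly
```

A clean import does not rule out a segfault later (the crash here happens during `optimize`, not at import), but a failed import pinpoints a bad build immediately.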
@hongzhouye Nothing obvious. Some debugging is needed. @tschijnmo @gauravharsha Any idea?

---
Original issue description:

When I tried to run the following simple script with

```
python xxx.py
```

as a sanity check (the script is taken from the `conf_ph.py` in the tutorial), I got the error message: […]

I checked the source code of `rdd.py` from `dummy_spark`, and it seems that the function `aggregate` is indeed not implemented therein but is called by `drudge.py`. Does anyone know how this could be resolved? Thanks in advance!