
Stata 17 crashes when using iebaltab with 4.3GB dataset #368

Open
paulasanematsu opened this issue Nov 7, 2024 · 1 comment

paulasanematsu commented Nov 7, 2024

Hello,

I am a Research Computing Facilitator at FASRC. Raul Duarte reached out to our support because he was running a Stata do-file that uses the iebaltab command on our cluster, and the job was dying midway through the computation. We troubleshot extensively without much progress, so we are reaching out to you for guidance. I will try to summarize the computational environment and what we have done so far.

Unfortunately, Raul’s data cannot be shared because of a signed Data Use Agreement (DUA), but we will try to explain as much as possible.

Computational environment

  • OS: Rocky Linux 8.9
  • Hardware (for more details, see https://docs.rc.fas.harvard.edu/kb/fasse/#SLURM_and_Partitions):
    • fasse_bigmem partition: Intel Ice Lake chipset, 499 GB of RAM, /tmp space is 172 GB
    • fasse_ultramem partition: Intel Ice Lake chipset, 2000 GB of RAM, /tmp space is 396 GB
  • Stata: version 17.0 with MP (64 cores)

Analysis

Raul wrote a do-file that uses the iebaltab command to analyze a dataset that is 4.3 GB:

iebaltab median_hs6_unit_price median_hs6_cifdoldecla median_hs6_imponiblegs unit_price_final cifdoldecla imponiblegs, replace grpvar(val_count_patronage_hire) fixedeffect(port_day_ID) ///
	savetex("$DirOutFasse\baltab_val_shipment_item_values_counter_day.tex") ///
	grplabels(0 "Non-patronage" @ 1 "Patronage")  format(%12.0fc) order(1 0) ///
	rowlabels(median_hs6_unit_price "Median HS6 unit price (in USD)" @ median_hs6_cifdoldecla "Median HS6 CIF value (in USD)" ///
		@ median_hs6_imponiblegs "Median HS6 tax base (in PYG)" @ unit_price_final "Unit price (in USD)" ///
		@ cifdoldecla "Declared CIF value (in USD)" @ imponiblegs "Tax base (in PYG)") nonote

Raul wrote:

This line uses iebaltab to create a balance table. My dataset is a database of imports, and for the balance-table tests of differences between two groups (patronage and non-patronage) handling shipment items, I want to include port-day fixed effects. Since I have 5 years of data and 31 customs ports, this could lead to more than 56,000 fixed effects (roughly 5 × 365 days × 31 ports ≈ 56,600 port-day cells), which seems to be what is causing the problem, as the balance table does run without the fixed effects.

His typical run was on fasse_bigmem (499 GB of RAM and 64 cores).

Troubleshooting steps

  1. On the Stata GUI, Raul tried the following:
    1. To rule out out-of-memory errors, he tested the do-file on our fasse_ultramem node with 2000 GB of RAM and 64 cores and still ran into the same problem.
    2. Successfully ran the do-file with iebaltab on a subset of his original dataset (a 5% random sample).
    3. Checked that he is not exceeding any of Stata's limits or settings.
    4. Set max_memory to slightly less than the total memory: 495 GB when the memory requested on fasse_bigmem was 499 GB (see the sketch after this list).
    5. Tried to run with Stata/SE using a single core, but Stata threw an error that the SE version could not handle that many variables.
    6. I suggested using debugging mode (https://www.stata.com/support/faqs/programming/debugging-program/), but that has not helped provide more useful information about the error.
  2. On the command line, I submitted a job via the scheduler to run the same do-file on the original dataset.
    1. While the job was running, I used top to watch CPU and memory usage, and I also kept checking the disk usage of /tmp with du. Core usage was almost 100% on all 64 cores, memory was at about 5–6% (of 499 GB), and /tmp held about 4–5 GB. At about 1 hour in, I could see each process dying and everything stalled.
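
For reference, here is a minimal sketch of the Stata settings touched in items 4 and 6 of step 1 above; the 495g cap mirrors the fasse_bigmem request, and the trace depth is just an illustrative choice:

* Cap Stata's dynamically allocated memory just below the memory requested from SLURM (item 4)
set max_memory 495g
query memory                  // confirm the current memory settings and limits

* Turn on Stata's trace output to see which command is executing when the job dies (item 6)
set trace on
set tracedepth 2              // limit how deeply nested programs are traced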

We are hoping you can offer some guidance on whether Raul has possibly run into a bug or whether there is something on our end that we need to change.

Thank you for taking the time to read this. We will be happy to answer any questions.

Best,
Paula and Raul

@kbjarkefur
Contributor

Wow, you are really putting our code to the test. Fun!

Here are my first reactions to what you have already tested:

  • It does not seem to be a memory issue. While that is good news, memory problems are usually the kind of issue that can be solved by improving the code. The command uses temp files to store intermediate results, and I paid a lot of attention to ensuring that temp files that are no longer needed are deleted.
  • The high dimensionality of this estimation (stemming from 56,000 fixed effects on a large number of observations) would create a significant load on the CPU, and you are saying that the CPU is at 100%. That observation makes sense, but it does not explain why the process would stop; the CPU should be able to work through the long queue of tasks as capacity becomes available.

Questions:

  • Do you have a GPU-enabled cluster? GPUs are better suited to handling a very high throughput of computational tasks. However, iebaltab does not implement any GPU support beyond whatever Stata's built-in commands provide, so it is hard for me to say how much of a difference this would make.
  • I do not think this is likely to be an issue with FASRC's cluster, but I'm still curious about what we can learn regarding timeouts or other constraints.
    • You say the processes die after ~1 hour. How close to exactly 1 hour is it? And is that time consistent regardless of the workload? What if you were to randomize a sub-sample (perhaps 80% or 50%)? Would the process fail at a very similar point in time? That would point to some kind of time-out (see the sketch after this list).
    • This question pushes the limits of my understanding of CPUs, but all CPUs have advanced task managers. In GPUs this is much simpler, but GPUs cannot handle all types of tasks; matrix multiplication in regression estimation, however, is something GPUs handle very well. Since the task manager needs to manage an extremely large queue, could it somehow run out of capacity? I do not think this computation is the largest FASRC has ever seen, but perhaps it is the largest one involving Stata? There might be something happening at the intersection of Stata and the task manager, especially with Stata on Linux, which is the least-used version of Stata and therefore the most likely to have a bug or a rare unhandled corner case.
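
If you want to test the time-out idea, here is a minimal sketch of how such a run could be set up; the seed, the 80 percent sample size, and the timer bookkeeping are only illustrative choices, not part of iebaltab itself:

set seed 12345                 // make the random sub-sample reproducible
timer clear 1
timer on 1
preserve
    sample 80                  // keep a random 80% of observations
    * ... run the same iebaltab call as in the original do-file ...
restore
timer off 1
timer list 1                   // compare when/whether runs of different sizes fail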

Suggestions:

  • In most modern operating systems, Stata's memory setting can be configured with set max_memory ., which allows Stata to manage memory dynamically as needed. While this is unlikely to be the cause of the issue since it does not seem memory-related, it is good to be aware of this setting.
  • I understand that subsetting the observations is not a valid approach, as it generates different results. But does the command work if you run one balance variable at a time? The way you have specified the command, each balance variable is analyzed independently, so running one variable at a time would still give you the same results. The only drawback is that you would need to combine the output LaTeX files after all estimations are completed. Not optimal, but if it works, it is likely your quickest way around this issue (see the sketch after this list).
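
Here is a rough sketch of the one-variable-at-a-time workaround. The local macro name and the per-variable file names are only illustrative, and I have dropped the rowlabels() option for brevity; you would keep your own labels and output path:

* Optional: let Stata manage memory dynamically, as mentioned above
set max_memory .

* Run iebaltab once per balance variable, writing a separate .tex file each time
local balvars median_hs6_unit_price median_hs6_cifdoldecla median_hs6_imponiblegs ///
    unit_price_final cifdoldecla imponiblegs

foreach var of local balvars {
    iebaltab `var', replace grpvar(val_count_patronage_hire) fixedeffect(port_day_ID) ///
        savetex("$DirOutFasse\baltab_`var'.tex") ///
        grplabels(0 "Non-patronage" @ 1 "Patronage") format(%12.0fc) order(1 0) nonote
}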

Let me know what these comments make you think or what these suggestions teach you. I am happy to keep working with you until this is resolved. However, the problem might also lie in Stata itself (especially on Linux), in which case I would not be able to help with a solution.
