gpt-4o_prompt.txt
GPT-4o (https://chatgpt.com/)
I need to build the following scenario in Python: 1) generate random data for products (stock), stores (each store has its own products/stock) and sales (which update each store's stock); 2) all of this data must be written to and queried from a MongoDB database; 3) simulate concurrent stock queries, inventory updates (products in stock at each store) and the addition of new stores (with their respective products and sales); and 4) record the performance of these database queries.
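A minimal stdlib sketch of the data-generation part of this scenario might look like the following. All field names are illustrative assumptions, and the MongoDB persistence (steps 2-4) is only hinted at in a trailing comment, since it needs a running server:

```python
import random
import uuid

def generate_product() -> dict:
    """Build one random product document (field names are illustrative)."""
    return {
        "_id": str(uuid.uuid4()),
        "name": f"Product-{random.randint(1, 9999)}",
        "price": round(random.uniform(1.0, 500.0), 2),
        "stock": random.randint(0, 100),
    }

def generate_store(num_products: int) -> dict:
    """Build one random store document holding its own product stock."""
    return {
        "_id": str(uuid.uuid4()),
        "name": f"Store-{random.randint(1, 999)}",
        "products": [generate_product() for _ in range(num_products)],
    }

def generate_sale(store: dict) -> dict:
    """Pick a random product from the store and record a sale that
    decrements its stock (clamped at zero)."""
    product = random.choice(store["products"])
    quantity = random.randint(1, 5)
    product["stock"] = max(0, product["stock"] - quantity)
    return {
        "_id": str(uuid.uuid4()),
        "store_id": store["_id"],
        "product_id": product["_id"],
        "quantity": quantity,
    }

# With pymongo, these documents could then be persisted, e.g.:
#   client = MongoClient("mongodb://localhost:27017")
#   client["inventory"]["stores"].insert_one(store)
```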
I would need more fields for the random data of products (stock), stores (each store has its own products/stock) and sales (which update each store's stock). Could you suggest additional Faker-generated fields for each of these entities, please?
Please make some adjustments: 1) create and use a method that occasionally adds new products during the simulation (the addition must be sporadic, controlled by a random parameter that decides whether a product is added or not); 2) show as performance indicators the total execution time and the mean read and write time of each operation during the simulation; and 3) make the simulation run 10 times (the number of runs must be a parameter) and report the average of each indicator as the final performance indicators.
How can I identify the number of processor cores on the machine running the script, independently of the operating system, and use it to define the percentage of cores I want to use in simulate_operations to size the parallel workers of the ThreadPoolExecutor?
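One portable way to answer this is the stdlib `os.cpu_count()`, which works the same on Linux, macOS and Windows. A sketch of sizing the pool from a core fraction (the function name is an assumption):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def workers_from_core_fraction(fraction: float) -> int:
    """Size a thread pool as a fraction of the available CPU cores.

    os.cpu_count() may return None in exotic environments, so fall
    back to 1, and never return fewer than 1 worker.
    """
    cores = os.cpu_count() or 1
    return max(1, int(cores * fraction))

# e.g. dedicate 50% of the cores to simulate_operations:
# with ThreadPoolExecutor(max_workers=workers_from_core_fraction(0.5)) as pool:
#     ...
```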
Please make some new adjustments: 1) make add_store add new stores occasionally, like add_product does (controlled by a random parameter that decides whether a store is added or not); 2) create indicators for the total number of query_stock, update_inventory, add_store and add_product operations effectively performed in each simulation run, plus a grand total in the final indicators group; 3) make each measure_performance run print its indicators at the end of the run; 4) add a docstring (following Google's docstring format) to each method, describing its purpose, the parameters it receives and what it returns; 5) add type annotations to every method's parameters and return values; and 6) add a local log, using Python logging, that records each output before printing it, prefixing every entry with a timestamp in the format "dd/MM/YYYY hh:mm:ss.ffff" and the logging level (ERROR, INFO, WARN, etc.).
Please make new adjustments: 1) adjust logging.basicConfig to identify the logging level, upper-case it and use it as the message prefix, so that the log_message method becomes unnecessary and native Python logging is used directly; 2) make logging write only to the file, and configure the log file with a 5 MB maximum size, overwriting it once that size is reached; 3) hide the terminal prints and use the Python tqdm module to display the progress of each performance-measurement run; 4) make the num_operations parameter used in simulate_operations a parameter of measure_performance, defaulting to 1000; 5) make the number of fake products generated in generate_fake_store random, between minimum and maximum values received as parameters; 6) use Matplotlib to generate a comparative chart, saved to a file, of each run's read, write and total time against the total number of operations performed; 7) then, also with Matplotlib, generate a final comparative chart, saved to a file, combining the per-run charts; and 8) before the script starts, save to the log and print to the terminal the following general information (create a function to group and centralize it): the start datetime, the database type and connection details, the machine's operating system, the CPU vendor, the number of CPU cores and the available RAM, the CPU and RAM operating frequencies, and the simulation and performance parameters defined.
Please make some more adjustments: 1) tqdm must use the num_operations parameter of simulate_operations to show the progress of each simulation run; 2) the initial number of sales simulated via the num_sales parameter of insert_sales used in measure_performance must be received as a parameter of measure_performance; and 3) the initial number of stores simulated via the num_stores parameter of insert_stores used in measure_performance must be received as a parameter of measure_performance.
Please create and return just the code to test the database connection and, if it is OK, test whether the database and the collections exist, creating them if they do not.
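A sketch of this bootstrap, split so the collection logic is testable without a server. The `db` argument only needs `list_collection_names()` and `create_collection()`, matching a real pymongo Database handle; the "inventory" database and collection names are illustrative assumptions, and the live connection check is left in a comment because it needs a running MongoDB:

```python
def ensure_collections(db, names):
    """Create any of the named collections missing from `db`.

    Returns the list of collections that had to be created, in the
    order they were requested.
    """
    existing = set(db.list_collection_names())
    created = [n for n in names if n not in existing]
    for name in created:
        db.create_collection(name)
    return created

# With a live server, a pymongo bootstrap might look like:
#   from pymongo import MongoClient
#   from pymongo.errors import ConnectionFailure
#   client = MongoClient("mongodb://localhost:27017",
#                        serverSelectionTimeoutMS=2000)
#   client.admin.command("ping")   # raises ConnectionFailure when down
#   db = client["inventory"]       # databases are created lazily
#   ensure_collections(db, ["stores", "products", "sales"])
```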
Please alter the simulate_operations method to generate only the following Matplotlib charts: 1) a scatter plot comparing the read time of each run; 2) a scatter plot comparing the write time of each run; 3) a scatter plot comparing the total execution time of each run; 4) a bar chart with an average line comparing the read time of each run; 5) a bar chart with an average line comparing the write time of each run; and 6) a bar chart with an average line comparing the total execution time of each run. Every bar must show its total value at the top, and every average line must show the average values at its dots. Every time measurement must be expressed in milliseconds, both in the logging and in the Matplotlib charts.
Please make the following adjustments: 1) before starting the simulation, print to the log and to the terminal the number of cores (max_workers) to be used, the number of runs and the number of operations in each run; 2) for each tqdm "Operations Progress" bar, show the run number (e.g., "Operations Progress 1"); 3) save every chart image to a folder named "executions", inside a subfolder named with the datetime when the script execution started; 4) for each run, generate a bar chart comparing the read and write times; 5) for the overall execution, generate a bar chart comparing the read and write times of each run; 6) for the overall execution, generate a candlestick chart comparing the read and write times of each run; 7) make the vertical axes of individual-run charts start at 0 ms with ticks every 5 ms; 8) make the vertical axes of overall-execution charts start at 0 ms with ticks every 250 ms; 9) make every horizontal axis that represents runs have a tick for each run; 10) make every horizontal axis that represents operations start at 0, with ticks at each group of total operations divided by 10; 11) in the subfolder, at the same level as the charts, generate a .txt file saving the logging output of each execution; 12) draw every chart's average line from the values of its bars; and 13) create a variable defining the chart width, starting at 600 px.
Please make the following adjustments: 1) the read-vs-write run charts must have a horizontal axis starting at 0, with ticks at each group of total operations divided by 10; 2) all charts of the same kind must share the same vertical-axis maximum and subdivisions, determined from the vertical values across those charts; and 3) all charts of the same kind must share the same horizontal-axis maximum and subdivisions, determined from the horizontal values across those charts.