diff --git a/README.md b/README.md index 6193f59..667d642 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# TinyTroupe 🧑‍⚕️🧑‍💼🧑‍💻🧑‍🔧 +# TinyTroupe 🤠🤓🥸🧐 *LLM-powered multiagent persona simulation for imagination enhancement and business insights.*

@@ -13,67 +13,40 @@ - **Product and project management:** TinyTroupe can **read project or product proposals** and **give feedback** from the perspective of **specific personas** (e.g., physicians, lawyers, and knowledge workers in general). - **Brainstorming:** TinyTroupe can simulate **focus groups** and deliver great product feedback at a fraction of the cost! -In all of the above, and many others, we hope users can **gain insights** about their domain of interest, and thus make better decisions. +In all of the above, and many others, we hope experimenters can **gain insights** about their domain of interest, and thus make better decisions. We are releasing *TinyTroupe* at a relativelly early stage, with considerable work still to be done, because we are looking for feedback and contributions to steer development in productive directions. We are particularly interested in finding new potential use cases, for instance in specific industries. +>[!NOTE] +>🚧 **WORK IN PROGRESS: expect frequent changes**. +>TinyTroupe is an ongoing research project, still under **very significant development** and requiring further **tidying up**. In particular, the API is still subject to frequent changes. Experimenting with API variations is essential to shape it correctly, but we are working to stabilize it and provide a more consistent and friendly experience over time. We appreciate your patience and feedback as we continue to improve the library. - - ->[!WARNING] ->⚖️ **Read the legal disclaimer:** +>[!CAUTION] +>⚖️ **Read the LEGAL DISCLAIMER.** >TinyTroupe is for research and simulation only. You are fully responsible for any use you make of the generated outputs. Various important additional legal considerations apply and constrain its use, please read the full [Legal Disclaimer](#legal-disclaimer) section below before using TinyTroupe. 
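The machine-readable results mentioned above can then feed other programs. Below is a minimal, hedged sketch that consumes an extraction result with the `agent_extractions`/`world_extraction` shape used by the sample files in `data/extractions/` in this repository; the focus-group key and ad copy value are made-up placeholders, not real output:

```python
import json

# Placeholder extraction result, mimicking the shape of the sample
# files in data/extractions/ (the key names and ad copy are illustrative).
raw = json.dumps({
    "agent_extractions": {},
    "world_extraction": {
        "Focus group": {"ad_copy": "Spacious, modern living in a quiet location!"}
    },
})

# As if loading a saved *.extraction.json file produced by a simulation.
data = json.loads(raw)
ad_copy = data["world_extraction"]["Focus group"]["ad_copy"]
print(ad_copy)
```

Because the output is plain JSON, downstream tools (dashboards, spreadsheets, other scripts) can consume it without depending on TinyTroupe itself.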
->[!NOTE] ->🚧 **API stability**: ->TinyTroupe is an ongoing research project, still under very significant development, and the API is still subject to frequent changes. We are working to stabilize the API and provide a more consistent and user-friendly experience. We appreciate your patience and feedback as we continue to improve the library. - -

- 🗺️ -
- - -
-
- 📚 Examples -
-
- 🛠️ Pre-requisites -
-
- 📥 Installation -
-
- 🌟 Principles -
-
- 🏗️ Project Structure -
-
- 📖 Using the Library -
-
- 🤝 Contributing -
-
- 🙏 Acknowledgements -
-
- 📜 Citing TinyTroupe -
-
- ⚖️ Legal Disclaimer -
-
- ™️ Trademarks -
-
+## Contents + +- 📚 [Examples](#examples) +- 🛠️ [Pre-requisites](#pre-requisites) +- 📥 [Installation](#installation) +- 🌟 [Principles](#principles) +- 🏗️ [Project Structure](#project-structure) +- 📖 [Using the Library](#using-the-library) +- 🤝 [Contributing](#contributing) +- 🙏 [Acknowledgements](#acknowledgements) +- 📜 [Citing TinyTroupe](#how-to-cite-tinytroupe) +- ⚖️ [Legal Disclaimer](#legal-disclaimer) +- ™️ [Trademarks](#trademarks) ## Examples -To get a sense of what TinyTroupe can do, here are some examples of its use. These examples are available in the `examples/` folder, and you can eihte inspect the pre-compiled Jupyter notebooks or run them yourself locally. +To get a sense of what TinyTroupe can do, here are some examples of its use. These examples are available in the [examples/](./examples/) folder, and you can either inspect the pre-compiled Jupyter notebooks or run them yourself locally. Notice the interactive nature of TinyTroupe experiments -- just like you use Jupyter notebooks to interact with data, you can use TinyTroupe to interact with simulated people and environments, for the purpose of gaining insights. + +>[!NOTE] +> Currently, simulation outputs are better visualized against dark backgrounds, so we recommend using a dark theme in your Jupyter notebook client. ### 🧪**Example 1** *(from [interview_with_customer.ipynb](./examples/interview_with_customer.ipynb))* Let's begin with a simple customer interview scenario, where a business consultant approaches a banker: @@ -114,16 +87,17 @@ After running a simulation, we can extract the results in a machine-readable man An example.

-You can find other examples in the `examples/` folder. +You can find other examples in the [examples/](./examples/) folder. ## Pre-requisites To run the library, you need: - - Python 3.10 or higher. + - Python 3.10 or higher. We'll assume you are using [Anaconda](https://docs.anaconda.com/anaconda/install/), but you can use other Python distributions. - Access to Azure OpenAI Service or Open AI GPT-4 APIs. You can get access to the Azure OpenAI Service [here](https://azure.microsoft.com/en-us/products/ai-services/openai-service), and to the OpenAI API [here](https://platform.openai.com/). * For Azure OpenAI Service, you will need to set the `AZURE_OPENAI_KEY` and `AZURE_OPENAI_ENDPOINT` environment variables to your API key and endpoint, respectively. * For OpenAI, you will need to set the `OPENAI_API_KEY` environment variable to your API key. + - By default, TinyTroupe's `config.ini` is set to use a specific API, model, and related parameters. You can customize these values by including your own `config.ini` file in the same folder as the program or notebook you are running. An example of a `config.ini` file is provided in the [examples/](./examples/) folder. >[!IMPORTANT] > **Content Filters**: To ensure no harmful content is generated during simulations, it is strongly recommended to use content filters whenever available at the API level. In particular, **if using Azure OpenAI, there's extensive support for content moderation, and we urge you to use it.** For details about how to do so, please consult [the corresponding Azure OpenAI documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter). If content filters are in place, and an API call is rejected by them, the library will raise an exception, as it will be unable to proceed with the simulation at that point. 
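As a concrete sketch, the environment variables listed above can be set in the shell before launching Python or Jupyter; the key and endpoint values below are placeholders, to be replaced with your own credentials:

```shell
# OpenAI API (placeholder value -- substitute your real key):
export OPENAI_API_KEY="sk-placeholder"

# Or, for Azure OpenAI Service (placeholder values):
export AZURE_OPENAI_KEY="your-azure-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"

# Quick sanity check that the variables are visible to child processes:
[ -n "$OPENAI_API_KEY" ] && echo "OPENAI_API_KEY is set"
```

On Windows, the equivalent is `setx` (persistent) or `set` (current session) in the command prompt.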
@@ -131,34 +105,53 @@ To run the library, you need: ## Installation -Currently, the official recommended way to install the library is directly from this repository, not PyPI. - -### From the GitHub repository -To install the library directly from the GitHub repository:: - +**Currently, the officially recommended way to install the library is directly from this repository, not PyPI.** You can follow these steps: + +1. If Conda is not installed, you can get it from [here](https://docs.anaconda.com/anaconda/install/). You can also use other Python distributions, but we'll assume Conda here for simplicity. +2. Create a new Python environment: + ```bash + conda create -n tinytroupe python=3.10 + ``` +3. Activate the environment: + ```bash + conda activate tinytroupe + ``` +4. Make sure you have either Azure OpenAI or OpenAI API keys set as environment variables, as described in the [Pre-requisites](#pre-requisites) section. +5. Install the library **from this repository, not PyPI**: + ```bash + pip install git+https://github.com/microsoft/tinytroupe.git + ``` +6. You can now use TinyTroupe to create your simulations 🥳. If you want to run the examples in the + [examples/](./examples/) folder or modify TinyTroupe itself, however, you should clone the repository as described below. + + +### Cloning the repository: examples and local development + +If you want to run the examples in the [examples/](./examples/) folder, you can simply clone the repository and run the examples directly from there: ```bash -$ pip install git+https://github.com/microsoft/tinytroupe.git +git clone https://github.com/microsoft/tinytroupe +cd tinytroupe ``` -### From the local repository -If you want to make changes to the library and test them locally, you can also of course clone the repository first: - +Further, if you want to modify TinyTroupe itself, you can install it from the local clone: ```bash -$ git clone https://github.com/microsoft/tinytroupe -$ cd tinytroupe -$ pip install . 
+pip install . ``` +or, in editable mode (i.e., changes to the code will be reflected immediately): +```bash +pip install -e . +``` ## Principles -Recently, we have seen LLMs used to simulate people (such as [this](https://github.com/joonspk-research/generative_agents)), but largely in a “game-like” setting for contemplative or entertainment purposes. What if we try instead to simulate people to support productivity tasks? TinyTroupe is our attempt. To do so, it follows these principles: +Recently, we have seen LLMs used to simulate people (such as [this](https://github.com/joonspk-research/generative_agents)), but largely in a “game-like” setting for contemplative or entertainment purposes. There are also libraries for building multiagent systems for problem-solving and assistive AI, like [Autogen](https://microsoft.github.io/autogen/) and [Crew AI](https://docs.crewai.com/). What if we combine these ideas and simulate people to support productivity tasks? TinyTroupe is our attempt. To do so, it follows these principles: 1. **Programmatic**: agents and environments are defined programmatically (in Python and JSON), allowing very flexible uses. They can also thus underpin other software apps! - 2. **Analytical**: meant to improve our understanding of people, users and society. Unlike entertainment applications, this is one aspect that is critical for business and productivity use cases. + 2. **Analytical**: meant to improve our understanding of people, users and society. Unlike entertainment applications, this is one aspect that is critical for business and productivity use cases. This is also why we recommend using Jupyter notebooks for simulations, just like one uses them for data analysis. 3. **Persona-based**: agents are meant to be archetypical representation of people; for greater realism and control, detailed specification of such personas is encouraged: age, occupation, skills, tastes, opinions, etc. 4. 
**Multiagent**: allows multiagent interaction under well-defined environmental constraints. 5. **Utilities-heavy**: provides many mechanisms to facilitate specifications, simulations, extractions, reports, validations, etc. This is one area in which dealing with *simulations* differs significantly from *assistance* tools. - 6. **Experiment-oriented**: simulations are defined, run, analyzed and refined by an *experimenter* iteratively; suitable experimentation tools are thus provided. + 6. **Experiment-oriented**: simulations are defined, run, analyzed and refined by an *experimenter* iteratively; suitable experimentation tools are thus provided. *See one of our [previous papers](https://www.microsoft.com/en-us/research/publication/the-case-for-experiment-oriented-computing/) for more on this.* Together, these are meant to make TinyTroupe a powerful and flexible **imagination enhancement tool** for business and productivity scenarios. @@ -182,7 +175,6 @@ One common source of confusion is to think all such AI agents are meant for assi The project is structured as follows: - `/tinytroupe`: contains the Python library itself. In particular: * `/tinytroupe/prompts` contains the prompts used to call the LLMs. - * `/tinytroupe/microsoft` contains elements specific to the _public_ Microsoft ecosystem. - `/tests`: contains the unit tests for the library. You can use the `test.bat` script to run these. - `/examples`: contains examples that show how to use the library, mainly using Jupyter notebooks (for greater readability), but also as pure Python scripts. - `/data`: any data used by the examples or the library. @@ -243,7 +235,7 @@ lisa.define_several("personality_traits", `TinyTroupe` also provides a clever way to obtain new agents, using LLMs to generate their specification for you, through the `TinyPersonFactory` class. 
```python -from tinytroupe.personfactory import TinyPersonFactory +from tinytroupe.factory import TinyPersonFactory factory = TinyPersonFactory("Create a Brazilian person that is a doctor, like pets and the nature and love heavy metal.") person = factory.generate_person() @@ -307,10 +299,11 @@ TinyTroupe provides a number of utilities and conveniences to help you create si - `TinyPersonFactory`: helps you generate new `TinyPerson`s using LLMs. - `TinyTool`: simulated tools that can be used by `TinyPerson`s. - `TinyStory`: helps you create and manage the story told through simulations. - - `InteractionResultsExtractor` and `InteractionResultsReducer`: extract and reduce the results of interactions between agents. - - `TinyPersonChecker`: helps you validate the behavior of your `TinyPerson`s. + - `TinyPersonValidator`: helps you validate the behavior of your `TinyPerson`s. + - `ResultsExtractor` and `ResultsReducer`: extract and reduce the results of interactions between agents. - ... and more ... +In general, elements that represent simulated entities or complementary mechanisms are prefixed with `Tiny`, while those that are more infrastructural are not. This is to emphasize the simulated nature of the elements that are part of the simulation itself. ### Caching Calling LLM APIs can be expensive, thus caching strategies are important to help reduce that cost. @@ -340,7 +333,7 @@ when a new call comes and is identical to a previous one, the cached value is re ### Config.ini -The `config.ini` file contains various parameters that can be used to customize the behavior of the library, such as model parameters and logging level. Please pay special attention to `API_TYPE` parameter, which defines whether you are using the Azure OpenAI Service or the OpenAI API. +The `config.ini` file contains various parameters that can be used to customize the behavior of the library, such as model parameters and logging level. 
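As a rough sketch, a custom `config.ini` might look like the following. Apart from `API_TYPE`, which this section discusses, the section and key names below are assumptions, so treat the example file shipped in [examples/](./examples/) as the authoritative template:

```ini
[OpenAI]
; "openai" for the OpenAI API, "azure" for the Azure OpenAI Service (assumed values).
API_TYPE=openai
; Model parameter (key name is an assumption).
MODEL=gpt-4o

[Logging]
; Logging level (section and key names are assumptions).
LOGLEVEL=ERROR
```

Placing such a file next to your program or notebook overrides the library defaults, as described above.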
Please pay special attention to the `API_TYPE` parameter, which defines whether you are using the Azure OpenAI Service or the OpenAI API. We provide an example of a `config.ini` file, [./examples/config.ini](./examples/config.ini), which you can use as a template for your own, or just modify to run the examples. ## Contributing @@ -357,9 +350,9 @@ For more information see the [Code of Conduct FAQ](https://opensource.microsoft. contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. ### What and How to Contribute -We need all sorts of things, like: - - New interesting use cases demonstrations, or even just domain-specific application ideas. - If you are a domain expert in some area that could benefit from TinyTroupe, we'd love to hear from you. +We need all sorts of things, but we are looking mainly for new interesting use case demonstrations, or even just domain-specific application ideas. If you are a domain expert in some area that could benefit from TinyTroupe, we'd love to hear from you. + +Beyond that, many other aspects can be improved, such as: - Memory mechanisms. - Data grounding mechanisms. - Reasoning mechanisms. @@ -370,7 +363,7 @@ We need all sorts of things, like: Please note that anything that you contribute might be released as open-source (under MIT license). If you would like to make a contribution, please try to follow these general guidelines: - - **Tiny-everything**: If you are implementing a user-facing element (e.g., an agent or environment type), and it sounds good, call your new _X_ as _TinyX_ :-) + - **Tiny naming convention**: If you are implementing an experimenter-facing simulated element (e.g., an agent or environment type) or a closely related one (e.g., an agent factory or content enricher), and it sounds good, call your new *XYZ* *TinyXYZ* :-) On the other hand, auxiliary and infrastructural mechanisms should not start with the "Tiny" prefix. 
The idea is to emphasize the simulated nature of the elements that are part of the simulation itself. - **Tests:** If you are writing some new mechanism, please also create at least a unit test `tests/unit/`, and if you can a functional scenario test (`tests/scenarios/`). - **Demonstrations:** If you'd like to demonstrate a new scenario, please design it preferably as a new Jupyter notebook within `examples/`. - **Microsoft:** If you are implementing anything that is Microsoft-specific and non-confidential, please put it under a `.../microsoft/` folder. @@ -399,8 +392,7 @@ Other special contributions were made by: ## Citing TinyTroupe -We are working in an introductory paper that will be the official academic citation for TinyTroupe. In the meantime, please just cite this repository including the core team members as authors. For instance: - +We are working on an introductory paper that will be the official academic citation for TinyTroupe. In the meantime, please just cite this repository including the core team members as authors. For instance: >Paulo Salem, Christopher Olsen, Paulo Freire, Yi Ding, Prerit Saxena (2024). **TinyTroupe: LLM-powered multiagent persona simulation for imagination enhancement and business insights.** [Computer software]. GitHub repository. https://github.com/microsoft/tinytroupe diff --git a/data/extractions/appartment_rent_ad.extraction.json b/data/extractions/appartment_rent_ad.extraction.json index d09db7d..f45b53c 100644 --- a/data/extractions/appartment_rent_ad.extraction.json +++ b/data/extractions/appartment_rent_ad.extraction.json @@ -2,7 +2,7 @@ "agent_extractions": {}, "world_extraction": { "Focus group": { - "ad_copy": "Discover the perfect blend of modern living and convenience in this beautifully renovated apartment. Located in a quiet spot on a main avenue, this spacious one-bedroom home features an elegant integrated kitchen and living room, complete with stylish 60s-style chairs and high-end appliances. 
Enjoy the luxury of two tastefully designed bathrooms and a versatile home office space. With wood-like floors and a new washer-dryer, this fully furnished apartment is ready for you to move in. Just steps away from public transport, shops, and restaurants, and near the country's top Medicine School, it's an ideal choice for students and professionals alike. Experience the tranquility and charm of this unique living space with high-quality photos and a virtual tour. Don't miss out on this opportunity to make it your new home!" + "ad_copy": "Spacious, Modern Living in a Quiet, Convenient Location! Discover this beautifully renovated apartment with an open design, perfect for students and young professionals. Enjoy the convenience of nearby amenities like supermarkets, gyms, and public transport, while relishing the tranquility of a quiet setting. Ideal for remote work with a home office space, and fully furnished with modern appliances. Don't miss out on this opportunity\u2014contact us today for a virtual tour and special incentives for early renters!" } } } \ No newline at end of file diff --git a/data/extractions/synthetic_data_generation.out.csv b/data/extractions/synthetic_data_generation.out.csv index ed31853..604a6cc 100644 --- a/data/extractions/synthetic_data_generation.out.csv +++ b/data/extractions/synthetic_data_generation.out.csv @@ -1,5 +1,6 @@ author,content USER,Message each other to get work done. -Liam,"Hi Samantha, let's coordinate on our tasks to ensure we complete them efficiently. How would you like to proceed?" -Samantha,"Hi Liam, let's start by listing our tasks and prioritizing them. We can then assign responsibilities and set deadlines to ensure everything is completed efficiently. How does that sound?" -Liam,"That sounds like a great plan, Samantha. Let's start by listing our tasks. Once we have them all, we can prioritize and assign responsibilities." 
+Lila,Let's make sure we are all on the same page and communicate effectively to get our tasks done. +Ethan,"Absolutely, Lila. Clear communication is key to ensuring we all stay aligned and productive." +Lila,"Thanks, Ethan. Let's set up regular check-ins to keep everyone aligned and address any communication gaps." +Ethan,"Great idea, Lila. Regular check-ins will definitely help us stay on track and address any issues promptly. Let's schedule them." diff --git a/data/extractions/wordprocessor/Document/Resume.Lisa.docx b/data/extractions/wordprocessor/Document/Resume.Lisa.docx new file mode 100644 index 0000000..4b17c6a Binary files /dev/null and b/data/extractions/wordprocessor/Document/Resume.Lisa.docx differ diff --git a/data/extractions/wordprocessor/Document/Resume.Lisa.json b/data/extractions/wordprocessor/Document/Resume.Lisa.json new file mode 100644 index 0000000..052ac5a --- /dev/null +++ b/data/extractions/wordprocessor/Document/Resume.Lisa.json @@ -0,0 +1,5 @@ +{ + "title": "Resume", + "content": "# Lisa's Resume\n\n## Objective\nI am a highly motivated and detail-oriented data science professional seeking a challenging position in the field of data science. My goal is to leverage my extensive skills in machine learning, data analysis, and statistical modeling to contribute to innovative and impactful projects. I am particularly interested in roles that allow me to work on cutting-edge technologies and collaborate with cross-functional teams to drive data-driven decision-making and enhance business outcomes.\n\n## Skills\n\n### Technical Skills\n- **Programming Languages:** Proficient in Python, with extensive experience in using libraries such as pandas for data manipulation, scikit-learn for machine learning, and TensorFlow for deep learning applications. 
Additionally, I have hands-on experience with Azure ML for deploying and managing machine learning models in the cloud.\n- **Data Analysis Tools:** Familiar with SQL for database querying and data extraction, as well as Power BI for creating interactive data visualizations and dashboards to communicate insights effectively to stakeholders.\n- **Machine Learning and AI:** Strong foundation in machine learning algorithms, including supervised and unsupervised learning, natural language processing, and computer vision. Experienced in building, training, and evaluating models to solve complex business problems.\n- **Analytical and Problem-Solving Skills:** Demonstrated ability to analyze large datasets, identify patterns and trends, and develop actionable insights. Skilled in using statistical techniques to validate hypotheses and support data-driven decision-making.\n\n### Soft Skills\n- **Communication:** Excellent verbal and written communication skills, with the ability to present complex technical concepts to non-technical audiences. Experienced in writing technical reports and documentation.\n- **Collaboration:** Proven track record of working effectively in team environments, collaborating with colleagues from diverse backgrounds to achieve common goals. Strong interpersonal skills and the ability to build positive relationships with stakeholders.\n- **Adaptability:** Quick learner with the ability to adapt to new technologies and methodologies. Open to feedback and committed to continuous professional development.\n\n## Experience\n\n### Data Scientist at Microsoft (June 2018 - Present)\n- **User Behavior Analysis:** Conducted in-depth analysis of user behavior and feedback data to identify areas for improvement in search results. 
Developed data-driven strategies to enhance user experience and increase engagement.\n- **Machine Learning Model Development:** Built and tested machine learning models for various search scenarios, including personalized recommendations and query understanding. Collaborated with engineering teams to integrate models into production systems.\n- **Privacy and Security Compliance:** Ensured that all data and models adhered to privacy and security policies, working closely with legal and compliance teams to address any potential risks. Implemented best practices for data governance and model transparency.\n\n### Data Analyst Intern at TechCorp (June 2017 - May 2018)\n- Assisted in the development of data pipelines for processing and analyzing large datasets. Conducted exploratory data analysis to uncover insights and support business decision-making.\n- Created interactive dashboards and reports using Power BI to visualize key performance indicators and track progress against business objectives.\n\n## Education\n\n### Bachelor's Degree in Computer Science\n- **University of California, Berkeley (2014 - 2018)**\n - Relevant Coursework: Data Structures and Algorithms, Machine Learning, Database Systems, Artificial Intelligence, Statistical Methods for Data Science\n - Honors: Dean's List (2016, 2017), Member of the Computer Science Honor Society\n\n## Certifications\n- **Certified Data Scientist (CDS)**\n- **Microsoft Certified: Azure Data Scientist Associate**\n\n## Projects\n\n### Predictive Analytics for E-commerce\n- Developed a predictive analytics model to forecast sales trends and optimize inventory management for an e-commerce platform. Utilized time series analysis and regression techniques to improve demand forecasting accuracy.\n\n### Sentiment Analysis of Social Media Data\n- Implemented a sentiment analysis model to analyze customer feedback on social media platforms. 
Used natural language processing techniques to classify sentiments and provide actionable insights for marketing strategies.\n\n## Interests\n\n- **Artificial Intelligence and Machine Learning:** Passionate about exploring the latest advancements in AI and machine learning, with a particular interest in deep learning and neural networks. Regularly attend industry conferences and workshops to stay updated on emerging trends.\n- **Natural Language Processing and Conversational Agents:** Enthusiastic about the potential of NLP and conversational agents to transform human-computer interactions. Enjoy experimenting with chatbot development and language models.\n- **Cooking and Trying New Recipes:** Avid home cook who loves experimenting with new recipes and cuisines. Enjoys hosting dinner parties and sharing culinary creations with friends and family.\n- **Playing the Piano:** Dedicated pianist with over 10 years of experience. Enjoys playing classical and contemporary pieces, as well as composing original music.\n- **Watching Movies:** Film enthusiast with a particular love for comedies and thrillers. 
Enjoys analyzing film techniques and storytelling methods.\n\n## References\nAvailable upon request.\n\n---\n\n### Table: Technical Skills Proficiency\n\n| Skill | Proficiency Level | Years of Experience |\n|-------------------------------|-------------------|---------------------|\n| Python | Expert | 5 |\n| pandas | Expert | 4 |\n| scikit-learn | Advanced | 4 |\n| TensorFlow | Advanced | 3 |\n| Azure ML | Intermediate | 2 |\n| SQL | Intermediate | 3 |\n| Power BI | Intermediate | 2 |\n\n### List: Key Achievements\n- Successfully improved search result relevance by 15% through data-driven analysis and model optimization.\n- Developed a machine learning model that reduced customer churn by 10% for a major client.\n- Led a cross-functional team to implement a data governance framework, ensuring compliance with industry standards.\n\nThis enriched resume provides a comprehensive overview of Lisa's qualifications, experience, and interests, making it a compelling document for potential employers.", + "author": "Lisa" +} \ No newline at end of file diff --git a/data/extractions/wordprocessor/Document/Resume.Lisa.md b/data/extractions/wordprocessor/Document/Resume.Lisa.md new file mode 100644 index 0000000..8303d95 --- /dev/null +++ b/data/extractions/wordprocessor/Document/Resume.Lisa.md @@ -0,0 +1,79 @@ +# Lisa's Resume + +## Objective +I am a highly motivated and detail-oriented data science professional seeking a challenging position in the field of data science. My goal is to leverage my extensive skills in machine learning, data analysis, and statistical modeling to contribute to innovative and impactful projects. I am particularly interested in roles that allow me to work on cutting-edge technologies and collaborate with cross-functional teams to drive data-driven decision-making and enhance business outcomes. 
+ +## Skills + +### Technical Skills +- **Programming Languages:** Proficient in Python, with extensive experience in using libraries such as pandas for data manipulation, scikit-learn for machine learning, and TensorFlow for deep learning applications. Additionally, I have hands-on experience with Azure ML for deploying and managing machine learning models in the cloud. +- **Data Analysis Tools:** Familiar with SQL for database querying and data extraction, as well as Power BI for creating interactive data visualizations and dashboards to communicate insights effectively to stakeholders. +- **Machine Learning and AI:** Strong foundation in machine learning algorithms, including supervised and unsupervised learning, natural language processing, and computer vision. Experienced in building, training, and evaluating models to solve complex business problems. +- **Analytical and Problem-Solving Skills:** Demonstrated ability to analyze large datasets, identify patterns and trends, and develop actionable insights. Skilled in using statistical techniques to validate hypotheses and support data-driven decision-making. + +### Soft Skills +- **Communication:** Excellent verbal and written communication skills, with the ability to present complex technical concepts to non-technical audiences. Experienced in writing technical reports and documentation. +- **Collaboration:** Proven track record of working effectively in team environments, collaborating with colleagues from diverse backgrounds to achieve common goals. Strong interpersonal skills and the ability to build positive relationships with stakeholders. +- **Adaptability:** Quick learner with the ability to adapt to new technologies and methodologies. Open to feedback and committed to continuous professional development. 
+ +## Experience + +### Data Scientist at Microsoft (June 2018 - Present) +- **User Behavior Analysis:** Conducted in-depth analysis of user behavior and feedback data to identify areas for improvement in search results. Developed data-driven strategies to enhance user experience and increase engagement. +- **Machine Learning Model Development:** Built and tested machine learning models for various search scenarios, including personalized recommendations and query understanding. Collaborated with engineering teams to integrate models into production systems. +- **Privacy and Security Compliance:** Ensured that all data and models adhered to privacy and security policies, working closely with legal and compliance teams to address any potential risks. Implemented best practices for data governance and model transparency. + +### Data Analyst Intern at TechCorp (June 2017 - May 2018) +- Assisted in the development of data pipelines for processing and analyzing large datasets. Conducted exploratory data analysis to uncover insights and support business decision-making. +- Created interactive dashboards and reports using Power BI to visualize key performance indicators and track progress against business objectives. + +## Education + +### Bachelor's Degree in Computer Science +- **University of California, Berkeley (2014 - 2018)** + - Relevant Coursework: Data Structures and Algorithms, Machine Learning, Database Systems, Artificial Intelligence, Statistical Methods for Data Science + - Honors: Dean's List (2016, 2017), Member of the Computer Science Honor Society + +## Certifications +- **Certified Data Scientist (CDS)** +- **Microsoft Certified: Azure Data Scientist Associate** + +## Projects + +### Predictive Analytics for E-commerce +- Developed a predictive analytics model to forecast sales trends and optimize inventory management for an e-commerce platform. Utilized time series analysis and regression techniques to improve demand forecasting accuracy. 
+ +### Sentiment Analysis of Social Media Data +- Implemented a sentiment analysis model to analyze customer feedback on social media platforms. Used natural language processing techniques to classify sentiments and provide actionable insights for marketing strategies. + +## Interests + +- **Artificial Intelligence and Machine Learning:** Passionate about exploring the latest advancements in AI and machine learning, with a particular interest in deep learning and neural networks. Regularly attend industry conferences and workshops to stay updated on emerging trends. +- **Natural Language Processing and Conversational Agents:** Enthusiastic about the potential of NLP and conversational agents to transform human-computer interactions. Enjoy experimenting with chatbot development and language models. +- **Cooking and Trying New Recipes:** Avid home cook who loves experimenting with new recipes and cuisines. Enjoys hosting dinner parties and sharing culinary creations with friends and family. +- **Playing the Piano:** Dedicated pianist with over 10 years of experience. Enjoys playing classical and contemporary pieces, as well as composing original music. +- **Watching Movies:** Film enthusiast with a particular love for comedies and thrillers. Enjoys analyzing film techniques and storytelling methods. + +## References +Available upon request. + +--- + +### Table: Technical Skills Proficiency + +| Skill | Proficiency Level | Years of Experience | +|-------------------------------|-------------------|---------------------| +| Python | Expert | 5 | +| pandas | Expert | 4 | +| scikit-learn | Advanced | 4 | +| TensorFlow | Advanced | 3 | +| Azure ML | Intermediate | 2 | +| SQL | Intermediate | 3 | +| Power BI | Intermediate | 2 | + +### List: Key Achievements +- Successfully improved search result relevance by 15% through data-driven analysis and model optimization. +- Developed a machine learning model that reduced customer churn by 10% for a major client. 
+- Led a cross-functional team to implement a data governance framework, ensuring compliance with industry standards. + +This enriched resume provides a comprehensive overview of Lisa's qualifications, experience, and interests, making it a compelling document for potential employers. \ No newline at end of file diff --git a/docs/api/tinytroupe/agent.html b/docs/api/tinytroupe/agent.html index 5cad79a..e30433d 100644 --- a/docs/api/tinytroupe/agent.html +++ b/docs/api/tinytroupe/agent.html @@ -1283,9 +1283,9 @@

Module tinytroupe.agent

# Mental faculties ####################################################################################################################### -class Faculty(JsonSerializableRegistry): +class TinyMentalFaculty(JsonSerializableRegistry): """ - Represents an optional mental faculty of an agent. Mental faculties are the cognitive abilities that an agent has. + Represents a mental faculty of an agent. Mental faculties are the cognitive abilities that an agent has. """ def __init__(self, name: str, requires_faculties: list=None) -> None: @@ -1304,10 +1304,10 @@

Module tinytroupe.agent

self.requires_faculties = requires_faculties def __str__(self) -> str: - return f"Faculty: {self.name}" + return f"Mental Faculty: {self.name}" def __eq__(self, other): - if isinstance(other, Faculty): + if isinstance(other, TinyMentalFaculty): return self.name == other.name return False @@ -1336,7 +1336,7 @@

Module tinytroupe.agent

raise NotImplementedError("Subclasses must implement this method.") -class RecallFaculty(Faculty): +class RecallFaculty(TinyMentalFaculty): def __init__(self): super().__init__("Memory Recall") @@ -1415,7 +1415,7 @@

Module tinytroupe.agent

return textwrap.dedent(prompt) -class FilesAndWebGroundingFaculty(Faculty): +class FilesAndWebGroundingFaculty(TinyMentalFaculty): """ Allows the agent to access local files and web pages to ground its knowledge. """ @@ -1492,7 +1492,7 @@

Module tinytroupe.agent

return textwrap.dedent(prompt) -class ToolUse(Faculty): +class TinyToolUse(TinyMentalFaculty): """ Allows the agent to use tools to accomplish tasks. Tool usage is one of the most important cognitive skills humans and primates have as we know. @@ -1531,7 +1531,7 @@

Module tinytroupe.agent

# Memory mechanisms ####################################################################################################################### -class Memory(Faculty): +class TinyMemory(TinyMentalFaculty): """ Base class for different types of memory. """ @@ -1577,7 +1577,7 @@

Module tinytroupe.agent

-class EpisodicMemory(Memory): +class EpisodicMemory(TinyMemory): """ Provides episodic memory capabilities to an agent. Cognitively, episodic memory is the ability to remember specific events, or episodes, in the past. This class provides a simple implementation of episodic memory, where the agent can store and retrieve @@ -1691,7 +1691,7 @@

Module tinytroupe.agent

return omisssion_info + self.memory[-n:] -class SemanticMemory(Memory): +class SemanticMemory(TinyMemory): """ Semantic memory is the memory of meanings, understandings, and other concept-based knowledge unrelated to specific experiences. It is not ordered temporally, and it is not about remembering specific events or episodes. This class provides a simple implementation @@ -1863,7 +1863,7 @@

Args

Expand source code -
class EpisodicMemory(Memory):
+
class EpisodicMemory(TinyMemory):
     """
     Provides episodic memory capabilities to an agent. Cognitively, episodic memory is the ability to remember specific events,
     or episodes, in the past. This class provides a simple implementation of episodic memory, where the agent can store and retrieve
@@ -1978,8 +1978,8 @@ 

Args

Ancestors

Class variables

@@ -2046,176 +2046,18 @@

Methods

Inherited members

- -
-class Faculty -(name: str, requires_faculties: list = None) -
-
-

Represents an optional mental faculty of an agent. Mental faculties are the cognitive abilities that an agent has.

-

Initializes the mental faculty.

-

Args

-
-
name : str
-
The name of the mental faculty.
-
requires_faculties : list
-
A list of mental faculties that this faculty requires to function properly.
-
-
- -Expand source code - -
class Faculty(JsonSerializableRegistry):
-    """
-    Represents an optional mental faculty of an agent. Mental faculties are the cognitive abilities that an agent has.
-    """
-
-    def __init__(self, name: str, requires_faculties: list=None) -> None:
-        """
-        Initializes the mental faculty.
-
-        Args:
-            name (str): The name of the mental faculty.
-            requires_faculties (list): A list of mental faculties that this faculty requires to function properly.
-        """
-        self.name = name
-        
-        if requires_faculties is None:
-            self.requires_faculties = []
-        else:
-            self.requires_faculties = requires_faculties
-
-    def __str__(self) -> str:
-        return f"Faculty: {self.name}"
-    
-    def __eq__(self, other):
-        if isinstance(other, Faculty):
-            return self.name == other.name
-        return False
-    
-    def process_action(self, agent, action: dict) -> bool:
-        """
-        Processes an action related to this faculty.
-
-        Args:
-            action (dict): The action to process.
-        
-        Returns:
-            bool: True if the action was successfully processed, False otherwise.
-        """
-        raise NotImplementedError("Subclasses must implement this method.")
-    
-    def actions_definitions_prompt(self) -> str:
-        """
-        Returns the prompt for defining a actions related to this faculty.
-        """
-        raise NotImplementedError("Subclasses must implement this method.")
-
-    def actions_constraints_prompt(self) -> str:
-        """
-        Returns the prompt for defining constraints on actions related to this faculty.
-        """
-        raise NotImplementedError("Subclasses must implement this method.")
-
-

Ancestors

- -

Subclasses

- -

Methods

-
-
-def actions_constraints_prompt(self) ‑> str -
-
-

Returns the prompt for defining constraints on actions related to this faculty.

-
- -Expand source code - -
def actions_constraints_prompt(self) -> str:
-    """
-    Returns the prompt for defining constraints on actions related to this faculty.
-    """
-    raise NotImplementedError("Subclasses must implement this method.")
-
-
-
-def actions_definitions_prompt(self) ‑> str -
-
-

Returns the prompt for defining a actions related to this faculty.

-
- -Expand source code - -
def actions_definitions_prompt(self) -> str:
-    """
-    Returns the prompt for defining a actions related to this faculty.
-    """
-    raise NotImplementedError("Subclasses must implement this method.")
-
-
-
-def process_action(self, agent, action: dict) ‑> bool -
-
-

Processes an action related to this faculty.

-

Args

-
-
action : dict
-
The action to process.
-
-

Returns

-
-
bool
-
True if the action was successfully processed, False otherwise.
-
-
- -Expand source code - -
def process_action(self, agent, action: dict) -> bool:
-    """
-    Processes an action related to this faculty.
-
-    Args:
-        action (dict): The action to process.
-    
-    Returns:
-        bool: True if the action was successfully processed, False otherwise.
-    """
-    raise NotImplementedError("Subclasses must implement this method.")
-
-
-
-

Inherited members

- @@ -2237,7 +2079,7 @@

Args

Expand source code -
class FilesAndWebGroundingFaculty(Faculty):
+
class FilesAndWebGroundingFaculty(TinyMentalFaculty):
     """
     Allows the agent to access local files and web pages to ground its knowledge.
     """
@@ -2305,220 +2147,29 @@ 

Args

<TALK something> <CONSULT some document name> <THINK something about the retrieved document> - <TALK something> - DONE - ``` - - When deciding whether to use RECALL or CONSULT, you should consider whether you are looking for any information about some topic (use RECALL) or if you are looking for information from - specific documents (use CONSULT). To know if you have potentially relevant documents available, use LIST_DOCUMENTS first. - """ - - return textwrap.dedent(prompt)
- -

Ancestors

- -

Inherited members

- -
-
-class Memory -(name: str, requires_faculties: list = None) -
-
-

Base class for different types of memory.

-

Initializes the mental faculty.

-

Args

-
-
name : str
-
The name of the mental faculty.
-
requires_faculties : list
-
A list of mental faculties that this faculty requires to function properly.
-
-
- -Expand source code - -
class Memory(Faculty):
-    """
-    Base class for different types of memory.
-    """
-
-    def store(self, value: Any) -> None:
-        """
-        Stores a value in memory.
-        """
-        raise NotImplementedError("Subclasses must implement this method.")
-
-    def retrieve(self, first_n: int, last_n: int, include_omission_info:bool=True) -> list:
-        """
-        Retrieves the first n and/or last n values from memory. If n is None, all values are retrieved.
-
-        Args:
-            first_n (int): The number of first values to retrieve.
-            last_n (int): The number of last values to retrieve.
-            include_omission_info (bool): Whether to include an information message when some values are omitted.
-
-        Returns:
-            list: The retrieved values.
-        
-        """
-        raise NotImplementedError("Subclasses must implement this method.")
-
-    def retrieve_recent(self) -> list:
-        """
-        Retrieves the n most recent values from memory.
-        """
-        raise NotImplementedError("Subclasses must implement this method.")
-
-    def retrieve_all(self) -> list:
-        """
-        Retrieves all values from memory.
-        """
-        raise NotImplementedError("Subclasses must implement this method.")
-
-    def retrieve_relevant(self, relevance_target:str, top_k=5) -> list:
-        """
-        Retrieves all values from memory that are relevant to a given target.
-        """
-        raise NotImplementedError("Subclasses must implement this method.")
-
-

Ancestors

- -

Subclasses

- -

Methods

-
-
-def retrieve(self, first_n: int, last_n: int, include_omission_info: bool = True) ‑> list -
-
-

Retrieves the first n and/or last n values from memory. If n is None, all values are retrieved.

-

Args

-
-
first_n : int
-
The number of first values to retrieve.
-
last_n : int
-
The number of last values to retrieve.
-
include_omission_info : bool
-
Whether to include an information message when some values are omitted.
-
-

Returns

-
-
list
-
The retrieved values.
-
-
- -Expand source code - -
def retrieve(self, first_n: int, last_n: int, include_omission_info:bool=True) -> list:
-    """
-    Retrieves the first n and/or last n values from memory. If n is None, all values are retrieved.
-
-    Args:
-        first_n (int): The number of first values to retrieve.
-        last_n (int): The number of last values to retrieve.
-        include_omission_info (bool): Whether to include an information message when some values are omitted.
-
-    Returns:
-        list: The retrieved values.
-    
-    """
-    raise NotImplementedError("Subclasses must implement this method.")
-
-
-
-def retrieve_all(self) ‑> list -
-
-

Retrieves all values from memory.

-
- -Expand source code - -
def retrieve_all(self) -> list:
-    """
-    Retrieves all values from memory.
-    """
-    raise NotImplementedError("Subclasses must implement this method.")
-
-
-
-def retrieve_recent(self) ‑> list -
-
-

Retrieves the n most recent values from memory.

-
- -Expand source code - -
def retrieve_recent(self) -> list:
-    """
-    Retrieves the n most recent values from memory.
-    """
-    raise NotImplementedError("Subclasses must implement this method.")
-
-
-
-def retrieve_relevant(self, relevance_target: str, top_k=5) ‑> list -
-
-

Retrieves all values from memory that are relevant to a given target.

-
- -Expand source code - -
def retrieve_relevant(self, relevance_target:str, top_k=5) -> list:
-    """
-    Retrieves all values from memory that are relevant to a given target.
-    """
-    raise NotImplementedError("Subclasses must implement this method.")
-
-
-
-def store(self, value: Any) ‑> None -
-
-

Stores a value in memory.

-
- -Expand source code - -
def store(self, value: Any) -> None:
-    """
-    Stores a value in memory.
-    """
-    raise NotImplementedError("Subclasses must implement this method.")
+ <TALK something> + DONE + ``` + - When deciding whether to use RECALL or CONSULT, you should consider whether you are looking for any information about some topic (use RECALL) or if you are looking for information from + specific documents (use CONSULT). To know if you have potentially relevant documents available, use LIST_DOCUMENTS first. + """ + + return textwrap.dedent(prompt)
- - +

Ancestors

+

Inherited members

@@ -2527,7 +2178,7 @@

Inherited members

class RecallFaculty
-

Represents an optional mental faculty of an agent. Mental faculties are the cognitive abilities that an agent has.

+

Represents a mental faculty of an agent. Mental faculties are the cognitive abilities that an agent has.

Initializes the mental faculty.

Args

@@ -2540,7 +2191,7 @@

Args

Expand source code -
class RecallFaculty(Faculty):
+
class RecallFaculty(TinyMentalFaculty):
 
     def __init__(self):
         super().__init__("Memory Recall")
@@ -2620,18 +2271,18 @@ 

Args

Ancestors

Inherited members

@@ -2656,7 +2307,7 @@

Args

Expand source code -
class SemanticMemory(Memory):
+
class SemanticMemory(TinyMemory):
     """
     Semantic memory is the memory of meanings, understandings, and other concept-based knowledge unrelated to specific experiences.
     It is not ordered temporally, and it is not about remembering specific events or episodes. This class provides a simple implementation
@@ -2787,168 +2438,517 @@ 

Args

- ########################################################### - # IO - ########################################################### + ########################################################### + # IO + ########################################################### + + def _post_deserialization_init(self): + super()._post_deserialization_init() + + self.add_documents_paths(self.documents_paths) + self.add_web_urls(self.documents_web_urls)
+ +

Ancestors

+ +

Class variables

+
+
var suppress_attributes_from_serialization
+
+
+
+
+

Methods

+
+
+def add_documents_path(self, documents_path: str) ‑> None +
+
+

Adds a path to a folder with documents used for semantic memory.

+
+ +Expand source code + +
def add_documents_path(self, documents_path:str) -> None:
+    """
+    Adds a path to a folder with documents used for semantic memory.
+    """
+
+    if documents_path not in self.documents_paths:
+        self.documents_paths.append(documents_path)
+        new_documents = SimpleDirectoryReader(documents_path).load_data()
+        self._add_documents(new_documents, lambda doc: doc.metadata["file_name"])
+
+
+
+def add_documents_paths(self, documents_paths: list) ‑> None +
+
+

Adds multiple paths to folders with documents used for semantic memory.

+
+ +Expand source code + +
def add_documents_paths(self, documents_paths:list) -> None:
+    """
+    Adds multiple paths to folders with documents used for semantic memory.
+    """
+
+    if documents_paths is not None:
+        for documents_path in documents_paths:
+            self.add_documents_path(documents_path)
+
+
+
+def add_web_url(self, web_url: str) ‑> None +
+
+

Adds the data retrieved from the specified URL to documents used for semantic memory.

+
+ +Expand source code + +
def add_web_url(self, web_url:str) -> None:
+    """
+    Adds the data retrieved from the specified URL to documents used for semantic memory.
+    """
+    # we do it like this because the add_web_urls could run scrapes in parallel, so it is better
+    # to implement this one in terms of the other
+    self.add_web_urls([web_url])
+
+
+
+def add_web_urls(self, web_urls: list) ‑> None +
+
+

Adds the data retrieved from the specified URLs to documents used for semantic memory.

+
+ +Expand source code + +
def add_web_urls(self, web_urls:list) -> None:
+    """ 
+    Adds the data retrieved from the specified URLs to documents used for semantic memory.
+    """
+    filtered_web_urls = [url for url in web_urls if url not in self.documents_web_urls]
+    self.documents_web_urls += filtered_web_urls
+
+    if len(filtered_web_urls) > 0:
+        new_documents = SimpleWebPageReader(html_to_text=True).load_data(filtered_web_urls)
+        self._add_documents(new_documents, lambda doc: doc.id_)
+
+
+
+def list_documents_names(self) ‑> list +
+
+

Lists the names of the documents in memory.

+
+ +Expand source code + +
def list_documents_names(self) -> list:
+    """
+    Lists the names of the documents in memory.
+    """
+    if self.filename_to_document is not None:
+        return list(self.filename_to_document.keys())
+    else:
+        return []
+
+
+
+def retrieve_document_content_by_name(self, document_name: str) ‑> str +
+
+

Retrieves a document by its name.

+
+ +Expand source code + +
def retrieve_document_content_by_name(self, document_name:str) -> str:
+    """
+    Retrieves a document by its name.
+    """
+    if self.filename_to_document is not None:
+        doc = self.filename_to_document[document_name]
+        if doc is not None:
+            content = "SOURCE: " + document_name
+            content += "\n" + "CONTENT: " + doc.text[:10000] # TODO a more intelligent way to limit the content
+            return content
+        else:
+            return None
+    else:
+        return None
+
+
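The grounding methods above (`add_documents_path`, `add_web_urls`) follow an add-with-deduplication pattern: only sources not already tracked are loaded and indexed, so repeated calls are idempotent. A minimal standalone sketch of that pattern — the loading step is stubbed here for illustration; `SemanticMemory` actually uses LlamaIndex's `SimpleDirectoryReader`/`SimpleWebPageReader`:

```python
class GroundingStore:
    """Stand-in illustrating SemanticMemory's idempotent-add pattern."""

    def __init__(self):
        self.documents_web_urls = []
        self.documents = []

    def add_web_urls(self, web_urls: list) -> None:
        # only fetch URLs we have not seen before
        filtered = [u for u in web_urls if u not in self.documents_web_urls]
        self.documents_web_urls += filtered
        if filtered:
            # in SemanticMemory this is SimpleWebPageReader(...).load_data(filtered)
            self.documents += [f"<content of {u}>" for u in filtered]
```

Because duplicates are filtered before fetching, calling `add_web_urls` again with an overlapping list only pays for the genuinely new URLs.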
+
+

Inherited members

+ +
+
+class TinyMemory +(name: str, requires_faculties: list = None) +
+
+

Base class for different types of memory.

+

Initializes the mental faculty.

+

Args

+
+
name : str
+
The name of the mental faculty.
+
requires_faculties : list
+
A list of mental faculties that this faculty requires to function properly.
+
+
+ +Expand source code + +
class TinyMemory(TinyMentalFaculty):
+    """
+    Base class for different types of memory.
+    """
+
+    def store(self, value: Any) -> None:
+        """
+        Stores a value in memory.
+        """
+        raise NotImplementedError("Subclasses must implement this method.")
+
+    def retrieve(self, first_n: int, last_n: int, include_omission_info:bool=True) -> list:
+        """
+        Retrieves the first n and/or last n values from memory. If n is None, all values are retrieved.
+
+        Args:
+            first_n (int): The number of first values to retrieve.
+            last_n (int): The number of last values to retrieve.
+            include_omission_info (bool): Whether to include an information message when some values are omitted.
+
+        Returns:
+            list: The retrieved values.
+        
+        """
+        raise NotImplementedError("Subclasses must implement this method.")
+
+    def retrieve_recent(self) -> list:
+        """
+        Retrieves the n most recent values from memory.
+        """
+        raise NotImplementedError("Subclasses must implement this method.")
+
+    def retrieve_all(self) -> list:
+        """
+        Retrieves all values from memory.
+        """
+        raise NotImplementedError("Subclasses must implement this method.")
+
+    def retrieve_relevant(self, relevance_target:str, top_k=5) -> list:
+        """
+        Retrieves all values from memory that are relevant to a given target.
+        """
+        raise NotImplementedError("Subclasses must implement this method.")
+
+
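The `retrieve` contract documented above — first n and/or last n values, with an optional notice when the middle is omitted — can be sketched standalone. The omission marker below is illustrative, not `EpisodicMemory`'s exact message format:

```python
def retrieve(memory: list, first_n: int = None, last_n: int = None,
             include_omission_info: bool = True) -> list:
    # Illustrative omission marker; EpisodicMemory uses its own message format.
    omission = ["(some values omitted)"] if include_omission_info else []

    if first_n is not None and last_n is not None:
        head, tail = memory[:first_n], memory[-last_n:]
        # only flag an omission when values were actually skipped
        middle_omitted = len(memory) > first_n + last_n
        return head + (omission if middle_omitted else []) + tail
    if first_n is not None:
        return memory[:first_n]
    if last_n is not None:
        return memory[-last_n:]
    return memory  # both None: retrieve everything
```

For example, `retrieve(list(range(10)), first_n=2, last_n=2)` keeps the two oldest and two newest values and marks the gap between them.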

Ancestors

+ +

Subclasses

+ +

Methods

+
+
+def retrieve(self, first_n: int, last_n: int, include_omission_info: bool = True) ‑> list +
+
+

Retrieves the first n and/or last n values from memory. If n is None, all values are retrieved.

+

Args

+
+
first_n : int
+
The number of first values to retrieve.
+
last_n : int
+
The number of last values to retrieve.
+
include_omission_info : bool
+
Whether to include an information message when some values are omitted.
+
+

Returns

+
+
list
+
The retrieved values.
+
+
+ +Expand source code + +
def retrieve(self, first_n: int, last_n: int, include_omission_info:bool=True) -> list:
+    """
+    Retrieves the first n and/or last n values from memory. If n is None, all values are retrieved.
+
+    Args:
+        first_n (int): The number of first values to retrieve.
+        last_n (int): The number of last values to retrieve.
+        include_omission_info (bool): Whether to include an information message when some values are omitted.
 
-    def _post_deserialization_init(self):
-        super()._post_deserialization_init()
+    Returns:
+        list: The retrieved values.
     
-        self.add_documents_paths(self.documents_paths)
-        self.add_web_urls(self.documents_web_urls)
+ """ + raise NotImplementedError("Subclasses must implement this method.")
-

Ancestors

- -

Class variables

-
-
var suppress_attributes_from_serialization
+ +
+def retrieve_all(self) ‑> list +
-
+

Retrieves all values from memory.

+
+ +Expand source code + +
def retrieve_all(self) -> list:
+    """
+    Retrieves all values from memory.
+    """
+    raise NotImplementedError("Subclasses must implement this method.")
+
-
-

Methods

-
-
-def add_documents_path(self, documents_path: str) ‑> None +
+def retrieve_recent(self) ‑> list
-

Adds a path to a folder with documents used for semantic memory.

+

Retrieves the n most recent values from memory.

Expand source code -
def add_documents_path(self, documents_path:str) -> None:
+
def retrieve_recent(self) -> list:
     """
-    Adds a path to a folder with documents used for semantic memory.
+    Retrieves the n most recent values from memory.
     """
-
-    if documents_path not in self.documents_paths:
-        self.documents_paths.append(documents_path)
-        new_documents = SimpleDirectoryReader(documents_path).load_data()
-        self._add_documents(new_documents, lambda doc: doc.metadata["file_name"])
+ raise NotImplementedError("Subclasses must implement this method.")
-
-def add_documents_paths(self, documents_paths: list) ‑> None +
+def retrieve_relevant(self, relevance_target: str, top_k=5) ‑> list
-

Adds a path to a folder with documents used for semantic memory.

+

Retrieves all values from memory that are relevant to a given target.

Expand source code -
def add_documents_paths(self, documents_paths:list) -> None:
+
def retrieve_relevant(self, relevance_target:str, top_k=5) -> list:
     """
-    Adds a path to a folder with documents used for semantic memory.
+    Retrieves all values from memory that are relevant to a given target.
     """
-
-    if documents_paths is not None:
-        for documents_path in documents_paths:
-            self.add_documents_path(documents_path)
+ raise NotImplementedError("Subclasses must implement this method.")
-
-def add_web_url(self, web_url: str) ‑> None +
+def store(self, value: Any) ‑> None
-

Adds the data retrieved from the specified URL to documents used for semantic memory.

+

Stores a value in memory.

Expand source code -
def add_web_url(self, web_url:str) -> None:
+
def store(self, value: Any) -> None:
     """
-    Adds the data retrieved from the specified URL to documents used for semantic memory.
+    Stores a value in memory.
     """
-    # we do it like this because the add_web_urls could run scrapes in parallel, so it is better
-    # to implement this one in terms of the other
-    self.add_web_urls([web_url])
+ raise NotImplementedError("Subclasses must implement this method.")
-
-def add_web_urls(self, web_urls: list) ‑> None +
+

Inherited members

+ + +
+class TinyMentalFaculty +(name: str, requires_faculties: list = None)
-

Adds the data retrieved from the specified URLs to documents used for semantic memory.

+

Represents a mental faculty of an agent. Mental faculties are the cognitive abilities that an agent has.

+

Initializes the mental faculty.

+

Args

+
+
name : str
+
The name of the mental faculty.
+
requires_faculties : list
+
A list of mental faculties that this faculty requires to function properly.
+
Expand source code -
def add_web_urls(self, web_urls:list) -> None:
-    """ 
-    Adds the data retrieved from the specified URLs to documents used for semantic memory.
+
class TinyMentalFaculty(JsonSerializableRegistry):
+    """
+    Represents a mental faculty of an agent. Mental faculties are the cognitive abilities that an agent has.
     """
-    filtered_web_urls = [url for url in web_urls if url not in self.documents_web_urls]
-    self.documents_web_urls += filtered_web_urls
 
-    if len(filtered_web_urls) > 0:
-        new_documents = SimpleWebPageReader(html_to_text=True).load_data(filtered_web_urls)
-        self._add_documents(new_documents, lambda doc: doc.id_)
+ def __init__(self, name: str, requires_faculties: list=None) -> None: + """ + Initializes the mental faculty. + + Args: + name (str): The name of the mental faculty. + requires_faculties (list): A list of mental faculties that this faculty requires to function properly. + """ + self.name = name + + if requires_faculties is None: + self.requires_faculties = [] + else: + self.requires_faculties = requires_faculties + + def __str__(self) -> str: + return f"Mental Faculty: {self.name}" + + def __eq__(self, other): + if isinstance(other, TinyMentalFaculty): + return self.name == other.name + return False + + def process_action(self, agent, action: dict) -> bool: + """ + Processes an action related to this faculty. + + Args: + action (dict): The action to process. + + Returns: + bool: True if the action was successfully processed, False otherwise. + """ + raise NotImplementedError("Subclasses must implement this method.") + + def actions_definitions_prompt(self) -> str: + """ + Returns the prompt for defining actions related to this faculty. + """ + raise NotImplementedError("Subclasses must implement this method.") + + def actions_constraints_prompt(self) -> str: + """ + Returns the prompt for defining constraints on actions related to this faculty. + """ + raise NotImplementedError("Subclasses must implement this method.")
+
+

Ancestors

+ +

Subclasses

+ +

Methods

+
+
+def actions_constraints_prompt(self) ‑> str +
+
+

Returns the prompt for defining constraints on actions related to this faculty.

+
+ +Expand source code + +
def actions_constraints_prompt(self) -> str:
+    """
+    Returns the prompt for defining constraints on actions related to this faculty.
+    """
+    raise NotImplementedError("Subclasses must implement this method.")
-
-def list_documents_names(self) ‑> list +
+def actions_definitions_prompt(self) ‑> str
-

Lists the names of the documents in memory.

+

Returns the prompt for defining actions related to this faculty.

Expand source code -
def list_documents_names(self) -> list:
+
def actions_definitions_prompt(self) -> str:
     """
-    Lists the names of the documents in memory.
+    Returns the prompt for defining actions related to this faculty.
     """
-    if self.filename_to_document is not None:
-        return list(self.filename_to_document.keys())
-    else:
-        return []
+ raise NotImplementedError("Subclasses must implement this method.")
-
-def retrieve_document_content_by_name(self, document_name: str) ‑> str +
+def process_action(self, agent, action: dict) ‑> bool
-

Retrieves a document by its name.

+

Processes an action related to this faculty.

+

Args

+
+
action : dict
+
The action to process.
+
+

Returns

+
+
bool
+
True if the action was successfully processed, False otherwise.
+
Expand source code -
def retrieve_document_content_by_name(self, document_name:str) -> str:
+
def process_action(self, agent, action: dict) -> bool:
     """
-    Retrieves a document by its name.
+    Processes an action related to this faculty.
+
+    Args:
+        action (dict): The action to process.
+    
+    Returns:
+        bool: True if the action was successfully processed, False otherwise.
     """
-    if self.filename_to_document is not None:
-        doc = self.filename_to_document[document_name]
-        if doc is not None:
-            content = "SOURCE: " + document_name
-            content += "\n" + "CONTENT: " + doc.text[:10000] # TODO a more intelligent way to limit the content
-            return content
-        else:
-            return None
-    else:
-        return None
+ raise NotImplementedError("Subclasses must implement this method.")
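Concretely, a new faculty subclasses `TinyMentalFaculty` and fills in the three methods above. A minimal sketch — the base class is stubbed here, reduced to the documented interface so the example runs standalone (in real use you would subclass `tinytroupe.agent.TinyMentalFaculty`), and the action format is a simplified assumption:

```python
class TinyMentalFaculty:
    """Stand-in for tinytroupe.agent.TinyMentalFaculty (documented interface only)."""

    def __init__(self, name: str, requires_faculties: list = None) -> None:
        self.name = name
        self.requires_faculties = requires_faculties or []

    def __str__(self) -> str:
        return f"Mental Faculty: {self.name}"


class CalculatorFaculty(TinyMentalFaculty):
    """Hypothetical faculty that lets an agent add lists of numbers."""

    def __init__(self):
        super().__init__("Calculator")

    def process_action(self, agent, action: dict) -> bool:
        if action.get("type") == "CALCULATE":
            # a real faculty would feed the result back into the agent's cognition
            self.last_result = sum(action.get("content", []))
            return True
        return False

    def actions_definitions_prompt(self) -> str:
        return "CALCULATE: add a list of numbers."

    def actions_constraints_prompt(self) -> str:
        return "Only CALCULATE when arithmetic is explicitly requested."
```

The two prompt methods are how a faculty injects its action vocabulary and usage rules into the agent's system prompt; `process_action` then handles those actions when the agent emits them.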

Inherited members

@@ -5245,8 +5245,8 @@

Inherited members

-
-class ToolUse +
+class TinyToolUse (tools: list)
@@ -5264,7 +5264,7 @@

Args

Expand source code -
class ToolUse(Faculty):
+
class TinyToolUse(TinyMentalFaculty):
     """
     Allows the agent to use tools to accomplish tasks. Tool usage is one of the most important cognitive skills
     humans and primates have as we know.
@@ -5300,18 +5300,18 @@ 

Args

Ancestors

Inherited members
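`TinyToolUse` is essentially a delegating faculty: it holds a list of tools and forwards each incoming action to them until one claims it. A standalone sketch of that dispatch pattern — the tool class and action shape below are simplified stand-ins for illustration, not the exact `tinytroupe.tools` interface:

```python
class EchoTool:
    """Hypothetical tool that handles ECHO actions."""

    def process_action(self, agent, action: dict) -> bool:
        if action.get("type") == "ECHO":
            self.last_echo = action.get("content", "")
            return True
        return False


def process_action_with_tools(tools: list, agent, action: dict) -> bool:
    # Try each tool in order; the first one that handles the action wins.
    for tool in tools:
        if tool.process_action(agent, action):
            return True
    return False  # no tool recognized the action
```

This keeps the faculty itself thin: adding a capability to an agent means adding another tool to the list, not changing the dispatch logic.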

@@ -5342,27 +5342,9 @@

Faculty

- - -
  • FilesAndWebGroundingFaculty

  • -

    Memory

    - -
  • -
  • RecallFaculty

  • @@ -5378,6 +5360,24 @@

    TinyMemory

    + +
  • +
  • +

    TinyMentalFaculty

    + +
  • +
  • TinyPerson

  • diff --git a/docs/api/tinytroupe/control.html b/docs/api/tinytroupe/control.html index dae2587..c00b2d5 100644 --- a/docs/api/tinytroupe/control.html +++ b/docs/api/tinytroupe/control.html @@ -101,7 +101,7 @@

    Module tinytroupe.control

    # local import to avoid circular dependencies from tinytroupe.agent import TinyPerson from tinytroupe.environment import TinyWorld - from tinytroupe.personfactory import TinyFactory + from tinytroupe.factory import TinyFactory if self.status == Simulation.STATUS_STOPPED: self.status = Simulation.STATUS_STARTED @@ -408,7 +408,7 @@

    Module tinytroupe.control

    # local import to avoid circular dependencies from tinytroupe.agent import TinyPerson from tinytroupe.environment import TinyWorld - from tinytroupe.personfactory import TinyFactory + from tinytroupe.factory import TinyFactory self.obj_under_transaction = obj_under_transaction self.simulation = simulation @@ -513,7 +513,7 @@

    Module tinytroupe.control

    # local import to avoid circular dependencies from tinytroupe.agent import TinyPerson from tinytroupe.environment import TinyWorld - from tinytroupe.personfactory import TinyFactory + from tinytroupe.factory import TinyFactory # if the output is a TinyPerson, encode it @@ -541,7 +541,7 @@

    Module tinytroupe.control

    # local import to avoid circular dependencies from tinytroupe.agent import TinyPerson from tinytroupe.environment import TinyWorld - from tinytroupe.personfactory import TinyFactory + from tinytroupe.factory import TinyFactory if encoded_output is None: return None @@ -901,7 +901,7 @@

    Ancestors

    # local import to avoid circular dependencies from tinytroupe.agent import TinyPerson from tinytroupe.environment import TinyWorld - from tinytroupe.personfactory import TinyFactory + from tinytroupe.factory import TinyFactory if self.status == Simulation.STATUS_STOPPED: self.status = Simulation.STATUS_STARTED @@ -1303,7 +1303,7 @@

    Args

    # local import to avoid circular dependencies from tinytroupe.agent import TinyPerson from tinytroupe.environment import TinyWorld - from tinytroupe.personfactory import TinyFactory + from tinytroupe.factory import TinyFactory if self.status == Simulation.STATUS_STOPPED: self.status = Simulation.STATUS_STARTED @@ -1455,7 +1455,7 @@

    Ancestors

    # local import to avoid circular dependencies from tinytroupe.agent import TinyPerson from tinytroupe.environment import TinyWorld - from tinytroupe.personfactory import TinyFactory + from tinytroupe.factory import TinyFactory self.obj_under_transaction = obj_under_transaction self.simulation = simulation @@ -1560,7 +1560,7 @@

    Ancestors

         # local import to avoid circular dependencies
         from tinytroupe.agent import TinyPerson
         from tinytroupe.environment import TinyWorld
-        from tinytroupe.personfactory import TinyFactory
+        from tinytroupe.factory import TinyFactory
 
         # if the output is a TinyPerson, encode it
@@ -1588,7 +1588,7 @@

    Ancestors

         # local import to avoid circular dependencies
         from tinytroupe.agent import TinyPerson
         from tinytroupe.environment import TinyWorld
-        from tinytroupe.personfactory import TinyFactory
+        from tinytroupe.factory import TinyFactory
 
         if encoded_output is None:
             return None
diff --git a/docs/api/tinytroupe/enrichment.html b/docs/api/tinytroupe/enrichment.html
index 8bd87bb..c9256a7 100644
--- a/docs/api/tinytroupe/enrichment.html
+++ b/docs/api/tinytroupe/enrichment.html
@@ -35,14 +35,14 @@

    Module tinytroupe.enrichment

 from tinytroupe.agent import TinyPerson
 from tinytroupe.environment import TinyWorld
-from tinytroupe.personfactory import TinyPersonFactory
+from tinytroupe.factory import TinyPersonFactory
 from tinytroupe.utils import JsonSerializableRegistry
 from tinytroupe import openai_utils
 import tinytroupe.utils as utils
 
-class Enricher(JsonSerializableRegistry):
+class TinyEnricher(JsonSerializableRegistry):
 
     def __init__(self, use_past_results_in_context=False) -> None:
         self.use_past_results_in_context = use_past_results_in_context
@@ -83,8 +83,8 @@

    Module tinytroupe.enrichment

    Classes

    -
    -class Enricher +
    +class TinyEnricher (use_past_results_in_context=False)
    @@ -93,7 +93,7 @@

    Classes

    Expand source code -
    class Enricher(JsonSerializableRegistry):
    +
    class TinyEnricher(JsonSerializableRegistry):
     
         def __init__(self, use_past_results_in_context=False) -> None:
             self.use_past_results_in_context = use_past_results_in_context
    @@ -129,7 +129,7 @@ 

    Ancestors

    Methods

    -
    +
    def enrich_content(self, requirements: str, content: str, content_type: str = None, context_info: str = '', context_cache: list = None, verbose: bool = False)
    @@ -190,9 +190,9 @@

    Index

  • Classes

diff --git a/docs/api/tinytroupe/environment.html b/docs/api/tinytroupe/environment.html
index cfabf87..3bc6c80 100644
--- a/docs/api/tinytroupe/environment.html
+++ b/docs/api/tinytroupe/environment.html
@@ -1086,7 +1086,7 @@

    Inherited members

  • class TinyWorld
-(name: str = 'A TinyWorld', agents=[], initial_datetime=datetime.datetime(2024, 11, 6, 23, 14, 30, 997928), broadcast_if_no_target=True)
+(name: str = 'A TinyWorld', agents=[], initial_datetime=datetime.datetime(2024, 11, 11, 0, 50, 46, 904535), broadcast_if_no_target=True)

    Base class for environments.

diff --git a/docs/api/tinytroupe/extraction.html b/docs/api/tinytroupe/extraction.html
index 3648888..07b645f 100644
--- a/docs/api/tinytroupe/extraction.html
+++ b/docs/api/tinytroupe/extraction.html
@@ -58,14 +58,14 @@

    Module tinytroupe.extraction

 from tinytroupe.agent import TinyPerson
 from tinytroupe.environment import TinyWorld
-from tinytroupe.personfactory import TinyPersonFactory
+from tinytroupe.factory import TinyPersonFactory
 from tinytroupe.utils import JsonSerializableRegistry
 from tinytroupe import openai_utils
 import tinytroupe.utils as utils
 
-class InteractionResultsExtractor:
+class ResultsExtractor:
 
     def __init__(self):
         self._extraction_prompt_template_path = os.path.join(os.path.dirname(__file__), 'prompts/interaction_results_extractor.mustache')
@@ -80,6 +80,7 @@

    Module tinytroupe.extraction

                              extraction_objective:str="The main points present in the agent's interactions history.",
                              situation:str = "",
                              fields:list=None,
+                             fields_hints:dict=None,
                              verbose:bool=False):
         """
         Extracts results from a TinyPerson instance.
@@ -99,6 +100,9 @@

    Module tinytroupe.extraction

         if fields is not None:
             rendering_configs["fields"] = ", ".join(fields)
 
+        if fields_hints is not None:
+            rendering_configs["fields_hints"] = list(fields_hints.items())
+
         messages.append({"role": "system",
                          "content": chevron.render(
                              open(self._extraction_prompt_template_path).read(),
@@ -149,6 +153,7 @@

    Module tinytroupe.extraction

                              extraction_objective:str="The main points that can be derived from the agents conversations and actions.",
                              situation:str="",
                              fields:list=None,
+                             fields_hints:dict=None,
                              verbose:bool=False):
         """
         Extracts results from a TinyWorld instance.
@@ -168,6 +173,9 @@

    Module tinytroupe.extraction

         if fields is not None:
             rendering_configs["fields"] = ", ".join(fields)
 
+        if fields_hints is not None:
+            rendering_configs["fields_hints"] = list(fields_hints.items())
+
         messages.append({"role": "system",
                          "content": chevron.render(
                              open(self._extraction_prompt_template_path).read(),
@@ -229,7 +237,7 @@
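The `fields_hints` hunks above turn a dict into a list of pairs before rendering: Mustache-style templates (TinyTroupe renders them with chevron) iterate over lists of items, not dicts. A minimal sketch of that conversion, with made-up field names and a hypothetical `build_rendering_configs` helper that is not part of TinyTroupe:

```python
# Sketch of the new fields_hints handling shown in the diff: the
# {field: hint} dict becomes a list of (field, hint) tuples so a
# Mustache-style template can iterate over it. Names are illustrative.
def build_rendering_configs(fields=None, fields_hints=None):
    rendering_configs = {}
    if fields is not None:
        rendering_configs["fields"] = ", ".join(fields)
    if fields_hints is not None:
        # dicts preserve insertion order in Python 3.7+, so the hints
        # appear in the template in the order they were declared
        rendering_configs["fields_hints"] = list(fields_hints.items())
    return rendering_configs

configs = build_rendering_configs(
    fields=["sentiment", "topics"],
    fields_hints={"sentiment": "overall tone of the conversation"})
# configs["fields"] is "sentiment, topics"; configs["fields_hints"]
# is [("sentiment", "overall tone of the conversation")]
```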

    Module tinytroupe.extraction

-class InteractionResultsReducer:
+class ResultsReducer:
 
     def __init__(self):
         self.results = {}
@@ -538,7 +546,7 @@

    Module tinytroupe.extraction

 ################################################################################
 # default extractor
-default_extractor = InteractionResultsExtractor()
+default_extractor = ResultsExtractor()
@@ -796,8 +804,221 @@

    Inherited members

    -
    -class InteractionResultsExtractor +
    +class Normalizer +(elements: List[str], n: int, verbose: bool = False) +
    +
    +

    A mechanism to normalize passages, concepts and other textual elements.

    +

    Normalizes the specified elements.

    +

    Args

    +
    +
    elements : list
    +
    The elements to normalize.
    +
    n : int
    +
    The number of normalized elements to output.
    +
    verbose : bool, optional
    +
    Whether to print debug messages. Defaults to False.
    +
    +
    + +Expand source code + +
    class Normalizer:
    +    """
    +    A mechanism to normalize passages, concepts and other textual elements.
    +    """
    +
    +    def __init__(self, elements:List[str], n:int, verbose:bool=False):
    +        """
    +        Normalizes the specified elements.
    +
    +        Args:
    +            elements (list): The elements to normalize.
    +            n (int): The number of normalized elements to output.
    +            verbose (bool, optional): Whether to print debug messages. Defaults to False.
    +        """
    +        # ensure elements are unique
    +        self.elements = list(set(elements))
    +        
    +        self.n = n  
    +        self.verbose = verbose 
    +        
    +        # a JSON-based structure, where each output element is a key to a list of input elements that were merged into it
    +        self.normalized_elements = None
    +        # a dict that maps each input element to its normalized output. This will be used as cache later.
    +        self.normalizing_map = {}      
    +
    +        rendering_configs = {"n": n,
    +                             "elements": self.elements}
    +
    +        messages = utils.compose_initial_LLM_messages_with_templates("normalizer.system.mustache", "normalizer.user.mustache", rendering_configs)
    +        next_message = openai_utils.client().send_message(messages, temperature=0.1)
    +        
    +        debug_msg = f"Normalization result message: {next_message}"
    +        logger.debug(debug_msg)
    +        if self.verbose:
    +            print(debug_msg)
    +
    +        result = utils.extract_json(next_message["content"])
    +        logger.debug(result)
    +        if self.verbose:
    +            print(result)
    +
    +        self.normalized_elements = result
    +
    +    
    +    def normalize(self, element_or_elements:Union[str, List[str]]) -> Union[str, List[str]]:
    +        """
    +        Normalizes the specified element or elements.
    +
    +        This method uses a caching mechanism to improve performance. If an element has been normalized before, 
    +        its normalized form is stored in a cache (self.normalizing_map). When the same element needs to be 
    +        normalized again, the method will first check the cache and use the stored normalized form if available, 
    +        instead of normalizing the element again.
    +
    +        The order of elements in the output will be the same as in the input. This is ensured by processing 
    +        the elements in the order they appear in the input and appending the normalized elements to the output 
    +        list in the same order.
    +
    +        Args:
    +            element_or_elements (Union[str, List[str]]): The element or elements to normalize.
    +
    +        Returns:
    +            str: The normalized element if the input was a string.
    +            list: The normalized elements if the input was a list, preserving the order of elements in the input.
    +        """
    +        if isinstance(element_or_elements, str):
    +            denormalized_elements = [element_or_elements]
    +        elif isinstance(element_or_elements, list):
    +            denormalized_elements = element_or_elements
    +        else:
    +            raise ValueError("The element_or_elements must be either a string or a list.")
    +        
    +        normalized_elements = []
    +        elements_to_normalize = []
    +        for element in denormalized_elements:
    +            if element not in self.normalizing_map:
    +                elements_to_normalize.append(element)
    +        
    +        if elements_to_normalize:
    +            rendering_configs = {"categories": self.normalized_elements,
    +                                    "elements": elements_to_normalize}
    +            
    +            messages = utils.compose_initial_LLM_messages_with_templates("normalizer.applier.system.mustache", "normalizer.applier.user.mustache", rendering_configs)
    +            next_message = openai_utils.client().send_message(messages, temperature=0.1)
    +            
    +            debug_msg = f"Normalization result message: {next_message}"
    +            logger.debug(debug_msg)
    +            if self.verbose:
    +                print(debug_msg)
    +    
    +            normalized_elements_from_llm = utils.extract_json(next_message["content"])
    +            assert isinstance(normalized_elements_from_llm, list), "The normalized element must be a list."
    +            assert len(normalized_elements_from_llm) == len(elements_to_normalize), "The number of normalized elements must be equal to the number of elements to normalize."
    +    
    +            for i, element in enumerate(elements_to_normalize):
    +                normalized_element = normalized_elements_from_llm[i]
    +                self.normalizing_map[element] = normalized_element
    +        
    +        for element in denormalized_elements:
    +            normalized_elements.append(self.normalizing_map[element])
    +        
    +        return normalized_elements
    +
    +

    Methods

    +
    +
    +def normalize(self, element_or_elements: Union[str, List[str]]) ‑> Union[str, List[str]] +
    +
    +

    Normalizes the specified element or elements.

    +

This method uses a caching mechanism to improve performance. If an element has been normalized before,
+its normalized form is stored in a cache (self.normalizing_map). When the same element needs to be
+normalized again, the method will first check the cache and use the stored normalized form if available,
+instead of normalizing the element again.

    +

The order of elements in the output will be the same as in the input. This is ensured by processing
+the elements in the order they appear in the input and appending the normalized elements to the output
+list in the same order.

    +

    Args

    +
    +
    element_or_elements : Union[str, List[str]]
    +
    The element or elements to normalize.
    +
    +

    Returns

    +
    +
    str
    +
    The normalized element if the input was a string.
    +
    list
    +
    The normalized elements if the input was a list, preserving the order of elements in the input.
    +
    +
    + +Expand source code + +
    def normalize(self, element_or_elements:Union[str, List[str]]) -> Union[str, List[str]]:
    +    """
    +    Normalizes the specified element or elements.
    +
    +    This method uses a caching mechanism to improve performance. If an element has been normalized before, 
    +    its normalized form is stored in a cache (self.normalizing_map). When the same element needs to be 
    +    normalized again, the method will first check the cache and use the stored normalized form if available, 
    +    instead of normalizing the element again.
    +
    +    The order of elements in the output will be the same as in the input. This is ensured by processing 
    +    the elements in the order they appear in the input and appending the normalized elements to the output 
    +    list in the same order.
    +
    +    Args:
    +        element_or_elements (Union[str, List[str]]): The element or elements to normalize.
    +
    +    Returns:
    +        str: The normalized element if the input was a string.
    +        list: The normalized elements if the input was a list, preserving the order of elements in the input.
    +    """
    +    if isinstance(element_or_elements, str):
    +        denormalized_elements = [element_or_elements]
    +    elif isinstance(element_or_elements, list):
    +        denormalized_elements = element_or_elements
    +    else:
    +        raise ValueError("The element_or_elements must be either a string or a list.")
    +    
    +    normalized_elements = []
    +    elements_to_normalize = []
    +    for element in denormalized_elements:
    +        if element not in self.normalizing_map:
    +            elements_to_normalize.append(element)
    +    
    +    if elements_to_normalize:
    +        rendering_configs = {"categories": self.normalized_elements,
    +                                "elements": elements_to_normalize}
    +        
    +        messages = utils.compose_initial_LLM_messages_with_templates("normalizer.applier.system.mustache", "normalizer.applier.user.mustache", rendering_configs)
    +        next_message = openai_utils.client().send_message(messages, temperature=0.1)
    +        
    +        debug_msg = f"Normalization result message: {next_message}"
    +        logger.debug(debug_msg)
    +        if self.verbose:
    +            print(debug_msg)
    +
    +        normalized_elements_from_llm = utils.extract_json(next_message["content"])
    +        assert isinstance(normalized_elements_from_llm, list), "The normalized element must be a list."
    +        assert len(normalized_elements_from_llm) == len(elements_to_normalize), "The number of normalized elements must be equal to the number of elements to normalize."
    +
    +        for i, element in enumerate(elements_to_normalize):
    +            normalized_element = normalized_elements_from_llm[i]
    +            self.normalizing_map[element] = normalized_element
    +    
    +    for element in denormalized_elements:
    +        normalized_elements.append(self.normalizing_map[element])
    +    
    +    return normalized_elements
    +
    +
    +
    +
    +
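The caching behavior that `Normalizer.normalize` documents above can be illustrated without any LLM calls. The following is a stand-in sketch, not TinyTroupe code: the toy `strip().lower()` rule replaces the real batched model request, and the class name is made up.

```python
# Stand-in sketch of Normalizer.normalize's caching pattern: unseen
# elements are normalized (here by a toy rule instead of an LLM call)
# and cached in normalizing_map; output order follows input order.
class CachingNormalizer:
    def __init__(self):
        self.normalizing_map = {}  # cache: input element -> normalized form

    def normalize(self, element_or_elements):
        if isinstance(element_or_elements, str):
            denormalized_elements = [element_or_elements]
        elif isinstance(element_or_elements, list):
            denormalized_elements = element_or_elements
        else:
            raise ValueError("The element_or_elements must be either a string or a list.")

        # collect only cache misses (the real class sends just these
        # to the model in a single batched request)
        to_normalize = [e for e in denormalized_elements
                        if e not in self.normalizing_map]
        for e in to_normalize:
            self.normalizing_map[e] = e.strip().lower()  # toy merge rule

        # serve every element from the cache, preserving input order
        return [self.normalizing_map[e] for e in denormalized_elements]
```

Note that, as in the real method's body (though not its docstring), a string input also yields a list.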
    +class ResultsExtractor
    @@ -805,7 +1026,7 @@

    Inherited members

    Expand source code -
    class InteractionResultsExtractor:
    +
    class ResultsExtractor:
     
         def __init__(self):
             self._extraction_prompt_template_path = os.path.join(os.path.dirname(__file__), 'prompts/interaction_results_extractor.mustache')
    @@ -820,6 +1041,7 @@ 

    Inherited members

                              extraction_objective:str="The main points present in the agent's interactions history.",
                              situation:str = "",
                              fields:list=None,
+                             fields_hints:dict=None,
                              verbose:bool=False):
         """
         Extracts results from a TinyPerson instance.
@@ -839,6 +1061,9 @@

    Inherited members

         if fields is not None:
             rendering_configs["fields"] = ", ".join(fields)
 
+        if fields_hints is not None:
+            rendering_configs["fields_hints"] = list(fields_hints.items())
+
         messages.append({"role": "system",
                          "content": chevron.render(
                              open(self._extraction_prompt_template_path).read(),
@@ -889,6 +1114,7 @@

    Inherited members

                              extraction_objective:str="The main points that can be derived from the agents conversations and actions.",
                              situation:str="",
                              fields:list=None,
+                             fields_hints:dict=None,
                              verbose:bool=False):
         """
         Extracts results from a TinyWorld instance.
@@ -908,6 +1134,9 @@

    Inherited members

         if fields is not None:
             rendering_configs["fields"] = ", ".join(fields)
 
+        if fields_hints is not None:
+            rendering_configs["fields_hints"] = list(fields_hints.items())
+
         messages.append({"role": "system",
                          "content": chevron.render(
                              open(self._extraction_prompt_template_path).read(),
@@ -969,8 +1198,8 @@

    Inherited members

    Methods

    -
    -def extract_results_from_agent(self, tinyperson: TinyPerson, extraction_objective: str = "The main points present in the agent's interactions history.", situation: str = '', fields: list = None, verbose: bool = False) +
    +def extract_results_from_agent(self, tinyperson: TinyPerson, extraction_objective: str = "The main points present in the agent's interactions history.", situation: str = '', fields: list = None, fields_hints: dict = None, verbose: bool = False)

    Extracts results from a TinyPerson instance.

    @@ -997,6 +1226,7 @@

    Args

                              extraction_objective:str="The main points present in the agent's interactions history.",
                              situation:str = "",
                              fields:list=None,
+                             fields_hints:dict=None,
                              verbose:bool=False):
         """
         Extracts results from a TinyPerson instance.
@@ -1016,6 +1246,9 @@

    Args

         if fields is not None:
             rendering_configs["fields"] = ", ".join(fields)
 
+        if fields_hints is not None:
+            rendering_configs["fields_hints"] = list(fields_hints.items())
+
         messages.append({"role": "system",
                          "content": chevron.render(
                              open(self._extraction_prompt_template_path).read(),
@@ -1061,8 +1294,8 @@

    Args

    return result
    -
    -def extract_results_from_world(self, tinyworld: TinyWorld, extraction_objective: str = 'The main points that can be derived from the agents conversations and actions.', situation: str = '', fields: list = None, verbose: bool = False) +
    +def extract_results_from_world(self, tinyworld: TinyWorld, extraction_objective: str = 'The main points that can be derived from the agents conversations and actions.', situation: str = '', fields: list = None, fields_hints: dict = None, verbose: bool = False)

    Extracts results from a TinyWorld instance.

    @@ -1089,6 +1322,7 @@

    Args

                              extraction_objective:str="The main points that can be derived from the agents conversations and actions.",
                              situation:str="",
                              fields:list=None,
+                             fields_hints:dict=None,
                              verbose:bool=False):
         """
         Extracts results from a TinyWorld instance.
@@ -1108,6 +1342,9 @@

    Args

         if fields is not None:
             rendering_configs["fields"] = ", ".join(fields)
 
+        if fields_hints is not None:
+            rendering_configs["fields_hints"] = list(fields_hints.items())
+
         messages.append({"role": "system",
                          "content": chevron.render(
                              open(self._extraction_prompt_template_path).read(),
@@ -1153,7 +1390,7 @@

    Args

    return result
    -
    +
    def save_as_json(self, filename: str, verbose: bool = False)
    @@ -1187,8 +1424,8 @@

    Args

    -
    -class InteractionResultsReducer +
    +class ResultsReducer
    @@ -1196,7 +1433,7 @@

    Args

    Expand source code -
    class InteractionResultsReducer:
    +
    class ResultsReducer:
     
         def __init__(self):
             self.results = {}
    @@ -1248,7 +1485,7 @@ 

    Args

    Methods

    -
    +
    def add_reduction_rule(self, trigger: str, func: )
    @@ -1264,7 +1501,7 @@

    Methods

    self.rules[trigger] = func
    -
    +
    def reduce_agent(self, agent: TinyPerson) ‑> list
    @@ -1307,7 +1544,7 @@

    Methods

    return reduction
    -
    +
    def reduce_agent_to_dataframe(self, agent: TinyPerson, column_names: list = None) ‑> pandas.core.frame.DataFrame
    @@ -1323,219 +1560,6 @@

    Methods

    -
    -class Normalizer -(elements: List[str], n: int, verbose: bool = False) -
    -
    -

    A mechanism to normalize passages, concepts and other textual elements.

    -

    Normalizes the specified elements.

    -

    Args

    -
    -
    elements : list
    -
    The elements to normalize.
    -
    n : int
    -
    The number of normalized elements to output.
    -
    verbose : bool, optional
    -
    Whether to print debug messages. Defaults to False.
    -
    -
    - -Expand source code - -
    class Normalizer:
    -    """
    -    A mechanism to normalize passages, concepts and other textual elements.
    -    """
    -
    -    def __init__(self, elements:List[str], n:int, verbose:bool=False):
    -        """
    -        Normalizes the specified elements.
    -
    -        Args:
    -            elements (list): The elements to normalize.
    -            n (int): The number of normalized elements to output.
    -            verbose (bool, optional): Whether to print debug messages. Defaults to False.
    -        """
    -        # ensure elements are unique
    -        self.elements = list(set(elements))
    -        
    -        self.n = n  
    -        self.verbose = verbose 
    -        
    -        # a JSON-based structure, where each output element is a key to a list of input elements that were merged into it
    -        self.normalized_elements = None
    -        # a dict that maps each input element to its normalized output. This will be used as cache later.
    -        self.normalizing_map = {}      
    -
    -        rendering_configs = {"n": n,
    -                             "elements": self.elements}
    -
    -        messages = utils.compose_initial_LLM_messages_with_templates("normalizer.system.mustache", "normalizer.user.mustache", rendering_configs)
    -        next_message = openai_utils.client().send_message(messages, temperature=0.1)
    -        
    -        debug_msg = f"Normalization result message: {next_message}"
    -        logger.debug(debug_msg)
    -        if self.verbose:
    -            print(debug_msg)
    -
    -        result = utils.extract_json(next_message["content"])
    -        logger.debug(result)
    -        if self.verbose:
    -            print(result)
    -
    -        self.normalized_elements = result
    -
    -    
    -    def normalize(self, element_or_elements:Union[str, List[str]]) -> Union[str, List[str]]:
    -        """
    -        Normalizes the specified element or elements.
    -
    -        This method uses a caching mechanism to improve performance. If an element has been normalized before, 
    -        its normalized form is stored in a cache (self.normalizing_map). When the same element needs to be 
    -        normalized again, the method will first check the cache and use the stored normalized form if available, 
    -        instead of normalizing the element again.
    -
    -        The order of elements in the output will be the same as in the input. This is ensured by processing 
    -        the elements in the order they appear in the input and appending the normalized elements to the output 
    -        list in the same order.
    -
    -        Args:
    -            element_or_elements (Union[str, List[str]]): The element or elements to normalize.
    -
    -        Returns:
    -            str: The normalized element if the input was a string.
    -            list: The normalized elements if the input was a list, preserving the order of elements in the input.
    -        """
    -        if isinstance(element_or_elements, str):
    -            denormalized_elements = [element_or_elements]
    -        elif isinstance(element_or_elements, list):
    -            denormalized_elements = element_or_elements
    -        else:
    -            raise ValueError("The element_or_elements must be either a string or a list.")
    -        
    -        normalized_elements = []
    -        elements_to_normalize = []
    -        for element in denormalized_elements:
    -            if element not in self.normalizing_map:
    -                elements_to_normalize.append(element)
    -        
    -        if elements_to_normalize:
    -            rendering_configs = {"categories": self.normalized_elements,
    -                                    "elements": elements_to_normalize}
    -            
    -            messages = utils.compose_initial_LLM_messages_with_templates("normalizer.applier.system.mustache", "normalizer.applier.user.mustache", rendering_configs)
    -            next_message = openai_utils.client().send_message(messages, temperature=0.1)
    -            
    -            debug_msg = f"Normalization result message: {next_message}"
    -            logger.debug(debug_msg)
    -            if self.verbose:
    -                print(debug_msg)
    -    
    -            normalized_elements_from_llm = utils.extract_json(next_message["content"])
    -            assert isinstance(normalized_elements_from_llm, list), "The normalized element must be a list."
    -            assert len(normalized_elements_from_llm) == len(elements_to_normalize), "The number of normalized elements must be equal to the number of elements to normalize."
    -    
    -            for i, element in enumerate(elements_to_normalize):
    -                normalized_element = normalized_elements_from_llm[i]
    -                self.normalizing_map[element] = normalized_element
    -        
    -        for element in denormalized_elements:
    -            normalized_elements.append(self.normalizing_map[element])
    -        
    -        return normalized_elements
    -
    -

    Methods

    -
    -
    -def normalize(self, element_or_elements: Union[str, List[str]]) ‑> Union[str, List[str]] -
    -
    -

    Normalizes the specified element or elements.

    -

This method uses a caching mechanism to improve performance. If an element has been normalized before,
-its normalized form is stored in a cache (self.normalizing_map). When the same element needs to be
-normalized again, the method will first check the cache and use the stored normalized form if available,
-instead of normalizing the element again.

    -

The order of elements in the output will be the same as in the input. This is ensured by processing
-the elements in the order they appear in the input and appending the normalized elements to the output
-list in the same order.

    -

    Args

    -
    -
    element_or_elements : Union[str, List[str]]
    -
    The element or elements to normalize.
    -
    -

    Returns

    -
    -
    str
    -
    The normalized element if the input was a string.
    -
    list
    -
    The normalized elements if the input was a list, preserving the order of elements in the input.
    -
    -
    - -Expand source code - -
    def normalize(self, element_or_elements:Union[str, List[str]]) -> Union[str, List[str]]:
    -    """
    -    Normalizes the specified element or elements.
    -
    -    This method uses a caching mechanism to improve performance. If an element has been normalized before, 
    -    its normalized form is stored in a cache (self.normalizing_map). When the same element needs to be 
    -    normalized again, the method will first check the cache and use the stored normalized form if available, 
    -    instead of normalizing the element again.
    -
    -    The order of elements in the output will be the same as in the input. This is ensured by processing 
    -    the elements in the order they appear in the input and appending the normalized elements to the output 
    -    list in the same order.
    -
    -    Args:
    -        element_or_elements (Union[str, List[str]]): The element or elements to normalize.
    -
    -    Returns:
    -        str: The normalized element if the input was a string.
    -        list: The normalized elements if the input was a list, preserving the order of elements in the input.
    -    """
    -    if isinstance(element_or_elements, str):
    -        denormalized_elements = [element_or_elements]
    -    elif isinstance(element_or_elements, list):
    -        denormalized_elements = element_or_elements
    -    else:
    -        raise ValueError("The element_or_elements must be either a string or a list.")
    -    
    -    normalized_elements = []
    -    elements_to_normalize = []
    -    for element in denormalized_elements:
    -        if element not in self.normalizing_map:
    -            elements_to_normalize.append(element)
    -    
    -    if elements_to_normalize:
    -        rendering_configs = {"categories": self.normalized_elements,
    -                                "elements": elements_to_normalize}
    -        
    -        messages = utils.compose_initial_LLM_messages_with_templates("normalizer.applier.system.mustache", "normalizer.applier.user.mustache", rendering_configs)
    -        next_message = openai_utils.client().send_message(messages, temperature=0.1)
    -        
    -        debug_msg = f"Normalization result message: {next_message}"
    -        logger.debug(debug_msg)
    -        if self.verbose:
    -            print(debug_msg)
    -
    -        normalized_elements_from_llm = utils.extract_json(next_message["content"])
    -        assert isinstance(normalized_elements_from_llm, list), "The normalized element must be a list."
    -        assert len(normalized_elements_from_llm) == len(elements_to_normalize), "The number of normalized elements must be equal to the number of elements to normalize."
    -
    -        for i, element in enumerate(elements_to_normalize):
    -            normalized_element = normalized_elements_from_llm[i]
    -            self.normalizing_map[element] = normalized_element
    -    
    -    for element in denormalized_elements:
    -        normalized_elements.append(self.normalizing_map[element])
    -    
    -    return normalized_elements
    -
    -
    -
    -
    @@ -1559,25 +1583,25 @@

    InteractionResultsExtractor

    +

    Normalizer

  • -

    InteractionResultsReducer

    +

    ResultsExtractor

  • -

    Normalizer

    +

    ResultsReducer

diff --git a/docs/api/tinytroupe/personfactory.html b/docs/api/tinytroupe/factory.html
similarity index 87%
rename from docs/api/tinytroupe/personfactory.html
rename to docs/api/tinytroupe/factory.html
index c28f64d..8c975f4 100644
--- a/docs/api/tinytroupe/personfactory.html
+++ b/docs/api/tinytroupe/factory.html
@@ -4,7 +4,7 @@
-tinytroupe.personfactory API documentation
+tinytroupe.factory API documentation
@@ -19,7 +19,7 @@
    -

    Module tinytroupe.personfactory

    +

    Module tinytroupe.factory

    @@ -147,17 +147,17 @@

    Module tinytroupe.personfactory

         logger.info(f"Starting the generation of the {number_of_factories} person factories based on that context: {generic_context_text}")
 
-        person_factories_prompt = open(os.path.join(os.path.dirname(__file__), 'prompts/generate_person_factory.md')).read()
+        system_prompt = open(os.path.join(os.path.dirname(__file__), 'prompts/generate_person_factory.md')).read()
 
         messages = []
-        messages.append({"role": "system", "content": person_factories_prompt})
+        messages.append({"role": "system", "content": system_prompt})
 
-        prompt = chevron.render("Please, create {{number_of_factories}} person descriptions based on the following broad context: {{context}}", {
+        user_prompt = chevron.render("Please, create {{number_of_factories}} person descriptions based on the following broad context: {{context}}", {
             "number_of_factories": number_of_factories,
             "context": generic_context_text
         })
-        messages.append({"role": "user", "content": prompt})
+        messages.append({"role": "user", "content": user_prompt})
 
         response = openai_utils.client().send_message(messages)
@@ -269,7 +269,7 @@

    Module tinytroupe.personfactory

    Classes

    -
    +
    class TinyFactory (simulation_id: str = None)
    @@ -365,18 +365,18 @@

    Args

    Subclasses

    Class variables

    -
    var all_factories
    +
    var all_factories

    Static methods

    -
    +
    def add_factory(factory)
    @@ -398,7 +398,7 @@

    Static methods

    TinyFactory.all_factories[factory.name] = factory
    -
    +
    def clear_factories()
    @@ -415,7 +415,7 @@

    Static methods

    TinyFactory.all_factories = {}
    -
    +
    def set_simulation_for_free_factories(simulation)
    @@ -439,7 +439,7 @@

    Static methods

    Methods

    -
    +
    def decode_complete_state(self, state: dict)
    @@ -458,7 +458,7 @@

    Methods

    return self
    -
    +
    def encode_complete_state(self) ‑> dict
    @@ -478,7 +478,7 @@

    Methods

    -
    +
    class TinyPersonFactory (context_text, simulation_id: str = None)
    @@ -528,17 +528,17 @@

    Args

    logger.info(f"Starting the generation of the {number_of_factories} person factories based on that context: {generic_context_text}") - person_factories_prompt = open(os.path.join(os.path.dirname(__file__), 'prompts/generate_person_factory.md')).read() + system_prompt = open(os.path.join(os.path.dirname(__file__), 'prompts/generate_person_factory.md')).read() messages = [] - messages.append({"role": "system", "content": person_factories_prompt}) + messages.append({"role": "system", "content": system_prompt}) - prompt = chevron.render("Please, create {{number_of_factories}} person descriptions based on the following broad context: {{context}}", { + user_prompt = chevron.render("Please, create {{number_of_factories}} person descriptions based on the following broad context: {{context}}", { "number_of_factories": number_of_factories, "context": generic_context_text }) - messages.append({"role": "user", "content": prompt}) + messages.append({"role": "user", "content": user_prompt}) response = openai_utils.client().send_message(messages) @@ -641,11 +641,11 @@

    Args

    Ancestors

    Static methods

    -
    +
    def generate_person_factories(number_of_factories, generic_context_text)
    @@ -681,17 +681,17 @@

    Returns

    logger.info(f"Starting the generation of the {number_of_factories} person factories based on that context: {generic_context_text}") - person_factories_prompt = open(os.path.join(os.path.dirname(__file__), 'prompts/generate_person_factory.md')).read() + system_prompt = open(os.path.join(os.path.dirname(__file__), 'prompts/generate_person_factory.md')).read() messages = [] - messages.append({"role": "system", "content": person_factories_prompt}) + messages.append({"role": "system", "content": system_prompt}) - prompt = chevron.render("Please, create {{number_of_factories}} person descriptions based on the following broad context: {{context}}", { + user_prompt = chevron.render("Please, create {{number_of_factories}} person descriptions based on the following broad context: {{context}}", { "number_of_factories": number_of_factories, "context": generic_context_text }) - messages.append({"role": "user", "content": prompt}) + messages.append({"role": "user", "content": user_prompt}) response = openai_utils.client().send_message(messages) @@ -711,7 +711,7 @@

    Returns

    Methods

    -
    +
    def generate_person(self, agent_particularities: str = None, temperature: float = 1.5, attepmpts: int = 5)
    @@ -798,13 +798,13 @@

    Returns

    Inherited members

    @@ -826,21 +826,21 @@

    Index

  • Classes

    diff --git a/docs/api/tinytroupe/index.html b/docs/api/tinytroupe/index.html index bf7c0da..b1acf0b 100644 --- a/docs/api/tinytroupe/index.html +++ b/docs/api/tinytroupe/index.html @@ -37,13 +37,6 @@

    Package tinytroupe

    sys.path.append('.') from tinytroupe import utils # now we can import our utils -config = utils.read_config_file() -utils.start_logger(config) - -# fix an issue in the rich library: we don't want margins in Jupyter! -rich.jupyter.JUPYTER_HTML_FORMAT = \ - utils.inject_html_css_style_prefix(rich.jupyter.JUPYTER_HTML_FORMAT, "margin:0px;") - # AI disclaimers print(\ """ @@ -52,7 +45,15 @@

    Package tinytroupe

    The AI models are not perfect and may produce inappropriate or inacurate results. For any serious or consequential use, please review the generated content before using it. !!!! -""")
    +""") + +config = utils.read_config_file() +utils.pretty_print_config(config) +utils.start_logger(config) + +# fix an issue in the rich library: we don't want margins in Jupyter! +rich.jupyter.JUPYTER_HTML_FORMAT = \ + utils.inject_html_css_style_prefix(rich.jupyter.JUPYTER_HTML_FORMAT, "margin:0px;")
  • @@ -90,15 +91,11 @@

    Sub-modules

    Simulations produce a lot of data, and it is often useful to extract these data in a structured way. For instance, you might wish to: - Extract the …

    -
    tinytroupe.openai_utils
    -
    -
    -
    -
    tinytroupe.personchecker
    +
    tinytroupe.factory
    -
    tinytroupe.personfactory
    +
    tinytroupe.openai_utils
    @@ -119,6 +116,10 @@

    Sub-modules

    General utilities and convenience functions.

    +
    tinytroupe.validation
    +
    +
    +
    @@ -143,13 +144,13 @@

    Index

  • tinytroupe.examples
  • tinytroupe.experimentation
  • tinytroupe.extraction
  • +
  • tinytroupe.factory
  • tinytroupe.openai_utils
  • -
  • tinytroupe.personchecker
  • -
  • tinytroupe.personfactory
  • tinytroupe.profiling
  • tinytroupe.story
  • tinytroupe.tools
  • tinytroupe.utils
  • +
  • tinytroupe.validation
  • diff --git a/docs/api/tinytroupe/openai_utils.html b/docs/api/tinytroupe/openai_utils.html index c22a8d8..a2d4520 100644 --- a/docs/api/tinytroupe/openai_utils.html +++ b/docs/api/tinytroupe/openai_utils.html @@ -1134,7 +1134,7 @@

    Methods

    -def send_message(self, current_messages, model='gpt-4o', temperature=0.3, max_tokens=4000, top_p=0, frequency_penalty=0.0, presence_penalty=0.0, stop=[], timeout=60.0, max_attempts=5.0, waiting_time=2.0, exponential_backoff_factor=5.0, n=1, echo=False) +def send_message(self, current_messages, model='gpt-4o', temperature=0.3, max_tokens=4000, top_p=0, frequency_penalty=0.0, presence_penalty=0.0, stop=[], timeout=60.0, max_attempts=5.0, waiting_time=1.0, exponential_backoff_factor=5.0, n=1, echo=False)

    Sends a message to the OpenAI API and returns the response.
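The hunk above only changes the default `waiting_time` from 2.0 to 1.0 seconds. For context, the `max_attempts`, `waiting_time`, and `exponential_backoff_factor` parameters suggest a retry loop along the following lines; this is a hedged sketch inferred from the parameter names, not TinyTroupe's actual `send_message` code:

```python
import time

def send_with_retries(send_fn, messages, max_attempts=5, waiting_time=1.0,
                      exponential_backoff_factor=5.0):
    """Call send_fn(messages), retrying with exponentially growing waits on failure."""
    wait = waiting_time
    for attempt in range(int(max_attempts)):
        try:
            return send_fn(messages)
        except Exception:
            if attempt == int(max_attempts) - 1:
                raise  # out of attempts: surface the last error
            time.sleep(wait)
            wait *= exponential_backoff_factor  # e.g. 1.0s, 5.0s, 25.0s, ...
```

Under the new defaults the first retry comes after 1 second instead of 2, with each subsequent wait multiplied by 5.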

    diff --git a/docs/api/tinytroupe/tools.html b/docs/api/tinytroupe/tools.html index 6a78645..41c8f82 100644 --- a/docs/api/tinytroupe/tools.html +++ b/docs/api/tinytroupe/tools.html @@ -39,7 +39,7 @@

    Module tinytroupe.tools

    import tinytroupe.utils as utils from tinytroupe.extraction import ArtifactExporter -from tinytroupe.enrichment import Enricher +from tinytroupe.enrichment import TinyEnricher from tinytroupe.utils import JsonSerializableRegistry diff --git a/docs/api/tinytroupe/utils.html b/docs/api/tinytroupe/utils.html index 82561ac..36a0e43 100644 --- a/docs/api/tinytroupe/utils.html +++ b/docs/api/tinytroupe/utils.html @@ -284,28 +284,41 @@

    Module tinytroupe.utils

    else: config = configparser.ConfigParser() - # first, try the directory of the current main program - config_file_path = Path.cwd() / "config.ini" + # Read the default values in the module directory. + config_file_path = Path(__file__).parent.absolute() / 'config.ini' + print(f"Looking for default config on: {config_file_path}") if verbose else None if config_file_path.exists(): - config.read(config_file_path) _config = config - return config else: - if verbose: - print(f"Failed to find custom config on: {config_file_path}") - print("Now switching to default config file...") + raise ValueError(f"Failed to find default config on: {config_file_path}") - # if nothing there, use the default one in the module directory - config_file_path = Path(__file__).parent.absolute() / 'config.ini' - print(f"Looking for config on: {config_file_path}") if verbose else None + # Now, let's override any specific default value, if there's a custom .ini config. + # Try the directory of the current main program + config_file_path = Path.cwd() / "config.ini" if config_file_path.exists(): - config.read(config_file_path) + print(f"Found custom config on: {config_file_path}") if verbose else None + config.read(config_file_path) # this only overrides the values that are present in the custom config _config = config return config else: - raise ValueError("Could not find config.ini file anywhere") - + if verbose: + print(f"Failed to find custom config on: {config_file_path}") if verbose else None + print("Will use only default values. IF THINGS FAIL, TRY CUSTOMIZING MODEL, API TYPE, etc.") if verbose else None + + return config + +def pretty_print_config(config): + print() + print("=================================") + print("Current TinyTroupe configuration ") + print("=================================") + for section in config.sections(): + print(f"[{section}]") + for key, value in config.items(section): + print(f"{key} = {value}") + print() + def start_logger(config: configparser.ConfigParser): # create logger logger = logging.getLogger("tinytroupe") @@ -859,6 +872,27 @@

    Returns

    return dt.strftime("%Y-%m-%d %H:%M")
    +
    +def pretty_print_config(config) +
    +
    +
    +
    + +Expand source code + +
    def pretty_print_config(config):
    +    print()
    +    print("=================================")
    +    print("Current TinyTroupe configuration ")
    +    print("=================================")
    +    for section in config.sections():
    +        print(f"[{section}]")
    +        for key, value in config.items(section):
    +            print(f"{key} = {value}")
    +        print()
    +
    +
    def read_config_file(use_cache=True, verbose=True) ‑> configparser.ConfigParser
    @@ -877,27 +911,29 @@

    Returns

    else: config = configparser.ConfigParser() - # first, try the directory of the current main program - config_file_path = Path.cwd() / "config.ini" + # Read the default values in the module directory. + config_file_path = Path(__file__).parent.absolute() / 'config.ini' + print(f"Looking for default config on: {config_file_path}") if verbose else None if config_file_path.exists(): - config.read(config_file_path) _config = config - return config else: - if verbose: - print(f"Failed to find custom config on: {config_file_path}") - print("Now switching to default config file...") + raise ValueError(f"Failed to find default config on: {config_file_path}") - # if nothing there, use the default one in the module directory - config_file_path = Path(__file__).parent.absolute() / 'config.ini' - print(f"Looking for config on: {config_file_path}") if verbose else None + # Now, let's override any specific default value, if there's a custom .ini config. + # Try the directory of the current main program + config_file_path = Path.cwd() / "config.ini" if config_file_path.exists(): - config.read(config_file_path) + print(f"Found custom config on: {config_file_path}") if verbose else None + config.read(config_file_path) # this only overrides the values that are present in the custom config _config = config return config else: - raise ValueError("Could not find config.ini file anywhere")
    + if verbose: + print(f"Failed to find custom config on: {config_file_path}") if verbose else None + print("Will use only default values. IF THINGS FAIL, TRY CUSTOMIZING MODEL, API TYPE, etc.") if verbose else None + + return config
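The rewritten `read_config_file` above first loads the package's default `config.ini` and then overlays any `config.ini` found in the current working directory, relying on the fact that a second `ConfigParser` read only overrides the keys present in the second source. A self-contained sketch of that layering behavior (the section and key values here are illustrative, taken from the new `examples/config.ini`):

```python
import configparser

# Defaults shipped with the package (first layer).
defaults = """
[OpenAI]
model = gpt-4o
temperature = 0.3
"""

# A user's custom config.ini (second layer): overrides only the keys it defines.
custom = """
[OpenAI]
temperature = 0.9
"""

config = configparser.ConfigParser()
config.read_string(defaults)   # load defaults first
config.read_string(custom)     # later reads override matching keys, keep the rest

print(config["OpenAI"]["model"])        # -> gpt-4o (unchanged default)
print(config["OpenAI"]["temperature"])  # -> 0.9 (overridden)
```

This is why the new code raises only when the *default* config is missing: the custom file is optional, and omitting it simply leaves every default in place.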
    @@ -1188,9 +1224,9 @@

    Classes

    Subclasses

    @@ -1384,6 +1420,7 @@

    Index

  • name_or_empty
  • post_init
  • pretty_datetime
  • +
  • pretty_print_config
  • read_config_file
  • repeat_on_error
  • sanitize_dict
  • diff --git a/docs/api/tinytroupe/personchecker.html b/docs/api/tinytroupe/validation.html similarity index 96% rename from docs/api/tinytroupe/personchecker.html rename to docs/api/tinytroupe/validation.html index 7d5c959..1472d0b 100644 --- a/docs/api/tinytroupe/personchecker.html +++ b/docs/api/tinytroupe/validation.html @@ -4,7 +4,7 @@ -tinytroupe.personchecker API documentation +tinytroupe.validation API documentation @@ -19,7 +19,7 @@
    -

    Module tinytroupe.personchecker

    +

    Module tinytroupe.validation

    @@ -40,7 +40,7 @@

    Module tinytroupe.personchecker

    default_max_content_display_length = config["OpenAI"].getint("MAX_CONTENT_DISPLAY_LENGTH", 1024) -class TinyPersonChecker: +class TinyPersonValidator: @staticmethod def validate_person(person, expectations=None, include_agent_spec=True, max_content_length=default_max_content_display_length): @@ -136,8 +136,8 @@

    Module tinytroupe.personchecker

    Classes

    -
    -class TinyPersonChecker +
    +class TinyPersonValidator
    @@ -145,7 +145,7 @@

    Classes

    Expand source code -
    class TinyPersonChecker:
    +
    class TinyPersonValidator:
     
         @staticmethod
         def validate_person(person, expectations=None, include_agent_spec=True, max_content_length=default_max_content_display_length):
    @@ -233,7 +233,7 @@ 

    Classes

    Static methods

    -
    +
    def validate_person(person, expectations=None, include_agent_spec=True, max_content_length=1024)
    @@ -367,9 +367,9 @@

    Index

  • Classes

    diff --git a/docs/example_screenshot_brainstorming-2.png b/docs/example_screenshot_brainstorming-2.png index b35113d..2d75927 100644 Binary files a/docs/example_screenshot_brainstorming-2.png and b/docs/example_screenshot_brainstorming-2.png differ diff --git a/docs/example_screenshot_customer-interview-1.png b/docs/example_screenshot_customer-interview-1.png index a792a0c..2b28b65 100644 Binary files a/docs/example_screenshot_customer-interview-1.png and b/docs/example_screenshot_customer-interview-1.png differ diff --git a/docs/example_screenshot_customer-interview-2.png b/docs/example_screenshot_customer-interview-2.png index 4b471b0..f884357 100644 Binary files a/docs/example_screenshot_customer-interview-2.png and b/docs/example_screenshot_customer-interview-2.png differ diff --git a/docs/example_screenshot_tv-ad-2.png b/docs/example_screenshot_tv-ad-2.png index ef97f9a..8385c13 100644 Binary files a/docs/example_screenshot_tv-ad-2.png and b/docs/example_screenshot_tv-ad-2.png differ diff --git a/examples/advertisement_for_tv.ipynb b/examples/advertisement_for_tv.ipynb index d2d2466..de9de94 100644 --- a/examples/advertisement_for_tv.ipynb +++ b/examples/advertisement_for_tv.ipynb @@ -11,7 +11,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -39,9 +39,9 @@ "import tinytroupe\n", "from tinytroupe.agent import TinyPerson\n", "from tinytroupe.examples import create_lisa_the_data_scientist, create_oscar_the_architect\n", - "from tinytroupe.personfactory import TinyPersonFactory\n", + "from tinytroupe.factory import TinyPersonFactory\n", "\n", - "from tinytroupe.extraction import InteractionResultsExtractor" + "from tinytroupe.extraction import ResultsExtractor" ] }, { @@ -473,7 +473,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -495,7 +495,7 @@ } ], "source": [ - "extractor = InteractionResultsExtractor()\n", + "extractor = 
ResultsExtractor()\n", "\n", "extraction_objective=\"Find the ad the agent chose. Extract the Ad number and title.\"\n", "\n", @@ -1536,7 +1536,7 @@ "metadata": {}, "outputs": [], "source": [ - "extractor = InteractionResultsExtractor()\n", + "extractor = ResultsExtractor()\n", "extraction_objective=\"Find the ad the agent chose. Extract the Ad number and title. Extract only ONE result.\"\n", "\n", "choices =[]\n", diff --git a/examples/config.ini b/examples/config.ini new file mode 100644 index 0000000..b668a1a --- /dev/null +++ b/examples/config.ini @@ -0,0 +1,44 @@ +[OpenAI] +# +# OpenAI or Azure OpenAI Service +# + +# Default options: openai, azure +API_TYPE=openai + +# Check Azure's documentation for updates here: +# https://learn.microsoft.com/en-us/azure/ai-services/openai/chatgpt-quickstart?tabs=command-line&pivots=programming-language-python +AZURE_API_VERSION=2023-05-15 + +# +# Model parameters +# + +MODEL=gpt-4o +MAX_TOKENS=4000 +TEMPERATURE=0.3 +FREQ_PENALTY=0.0 +PRESENCE_PENALTY=0.0 +TIMEOUT=60 +MAX_ATTEMPTS=5 +WAITING_TIME=1 +EXPONENTIAL_BACKOFF_FACTOR=5 + +EMBEDDING_MODEL=text-embedding-3-small + +CACHE_API_CALLS=False +CACHE_FILE_NAME=openai_api_cache.pickle + +MAX_CONTENT_DISPLAY_LENGTH=1024 + +[Simulation] +RAI_HARMFUL_CONTENT_PREVENTION=True +RAI_COPYRIGHT_INFRINGEMENT_PREVENTION=True + + +[Logging] +LOGLEVEL=ERROR +# ERROR +# WARNING +# INFO +# DEBUG \ No newline at end of file diff --git a/examples/create_ad_for_appartment.ipynb b/examples/create_ad_for_appartment.ipynb index 838a575..1ceac86 100644 --- a/examples/create_ad_for_appartment.ipynb +++ b/examples/create_ad_for_appartment.ipynb @@ -11,7 +11,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -38,7 +38,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ @@ -47,7 +47,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": null, 
"metadata": {}, "outputs": [], "source": [ @@ -95,7 +95,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -309,7 +309,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -1329,7 +1329,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -1366,7 +1366,7 @@ }, { "cell_type": "code", - "execution_count": 14, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ diff --git a/examples/creating_and_validating_agents.ipynb b/examples/creating_and_validating_agents.ipynb index b9e434c..63f7e72 100644 --- a/examples/creating_and_validating_agents.ipynb +++ b/examples/creating_and_validating_agents.ipynb @@ -11,7 +11,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -32,10 +32,10 @@ "import tinytroupe\n", "from tinytroupe.agent import TinyPerson\n", "from tinytroupe.environment import TinyWorld, TinySocialNetwork\n", - "from tinytroupe.personfactory import TinyPersonFactory\n", - "from tinytroupe.personchecker import TinyPersonChecker\n", + "from tinytroupe.factory import TinyPersonFactory\n", + "from tinytroupe.validation import TinyPersonValidator\n", "from tinytroupe.extraction import default_extractor as extractor\n", - "from tinytroupe.extraction import InteractionResultsReducer\n", + "from tinytroupe.extraction import ResultsReducer\n", "import tinytroupe.control as control\n", "\n", "import textwrap" @@ -136,7 +136,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -445,7 +445,7 @@ } ], "source": [ - "banker_score, banker_justification = TinyPersonChecker.validate_person(banker, expectations=banker_expectations, include_agent_spec=False, max_content_length=None)" + "banker_score, banker_justification = 
TinyPersonValidator.validate_person(banker, expectations=banker_expectations, include_agent_spec=False, max_content_length=None)" ] }, { @@ -588,7 +588,7 @@ }, { "cell_type": "code", - "execution_count": 13, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -839,7 +839,7 @@ } ], "source": [ - "score, justification = TinyPersonChecker.validate_person(busy_knowledge_worker, expectations=bkw_expectations, include_agent_spec=False, max_content_length=None)" + "score, justification = TinyPersonValidator.validate_person(busy_knowledge_worker, expectations=bkw_expectations, include_agent_spec=False, max_content_length=None)" ] }, { @@ -900,7 +900,7 @@ }, { "cell_type": "code", - "execution_count": 16, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -1173,7 +1173,7 @@ } ], "source": [ - "wrong_expectations_score, wrong_expectations_justification = TinyPersonChecker.validate_person(busy_knowledge_worker, expectations=banker_expectations, include_agent_spec=False, max_content_length=None)\n" + "wrong_expectations_score, wrong_expectations_justification = TinyPersonValidator.validate_person(busy_knowledge_worker, expectations=banker_expectations, include_agent_spec=False, max_content_length=None)\n" ] }, { diff --git a/examples/interview_with_customer.ipynb b/examples/interview_with_customer.ipynb index dde6ebd..5617fc3 100644 --- a/examples/interview_with_customer.ipynb +++ b/examples/interview_with_customer.ipynb @@ -18,9 +18,43 @@ "name": "stdout", "output_type": "stream", "text": [ - "Failed to find custom config on: c:\\Users\\pdasilva\\OneDrive - Microsoft\\Git repositories\\tinytroupe\\notebooks\\config.ini\n", - "Now switching to default config file...\n", - "Looking for config on: c:\\Users\\pdasilva\\AppData\\Local\\anaconda3\\envs\\py310\\lib\\site-packages\\tinytroupe\\config.ini\n" + "\n", + "!!!!\n", + "DISCLAIMER: TinyTroupe relies on Artificial Intelligence (AI) models to generate content. 
\n", + "The AI models are not perfect and may produce inappropriate or inacurate results. \n", + "For any serious or consequential use, please review the generated content before using it.\n", + "!!!!\n", + "\n", + "Looking for default config on: c:\\Users\\pdasilva\\OneDrive - Microsoft\\Git repositories\\tinytroupe-opensource\\TinyTroupe\\examples\\..\\tinytroupe\\config.ini\n", + "Found custom config on: c:\\Users\\pdasilva\\OneDrive - Microsoft\\Git repositories\\tinytroupe-opensource\\TinyTroupe\\examples\\config.ini\n", + "\n", + "=================================\n", + "Current TinyTroupe configuration \n", + "=================================\n", + "[OpenAI]\n", + "api_type = openai\n", + "azure_api_version = 2023-05-15\n", + "model = gpt-4o\n", + "max_tokens = 4000\n", + "temperature = 0.3\n", + "freq_penalty = 0.0\n", + "presence_penalty = 0.0\n", + "timeout = 60\n", + "max_attempts = 5\n", + "waiting_time = 1\n", + "exponential_backoff_factor = 5\n", + "embedding_model = text-embedding-3-small\n", + "cache_api_calls = False\n", + "cache_file_name = openai_api_cache.pickle\n", + "max_content_display_length = 1024\n", + "\n", + "[Simulation]\n", + "rai_harmful_content_prevention = True\n", + "rai_copyright_infringement_prevention = True\n", + "\n", + "[Logging]\n", + "loglevel = ERROR\n", + "\n" ] } ], @@ -32,9 +66,9 @@ "import tinytroupe\n", "from tinytroupe.agent import TinyPerson\n", "from tinytroupe.environment import TinyWorld, TinySocialNetwork\n", - "from tinytroupe.personfactory import TinyPersonFactory\n", + "from tinytroupe.factory import TinyPersonFactory\n", "from tinytroupe.extraction import default_extractor as extractor\n", - "from tinytroupe.extraction import InteractionResultsReducer\n", + "from tinytroupe.extraction import ResultsReducer\n", "import tinytroupe.control as control" ] }, @@ -51,13 +85,14 @@ "metadata": {}, "outputs": [], "source": [ - "factory = TinyPersonFactory(\n", + "factory = TinyPersonFactory(\"One of the largest banks 
in Brazil, full of bureaucracy and legacy systems.\")\n", + "\n", + "customer = factory.generate_person(\n", " \"\"\"\n", - " A vice-president of one of the largest brazillian banks. Has a degree in engineering and a MBA in finance. \n", + " The vice-president of one product innovation. Has a degree in engineering and a MBA in finance. \n", " Is facing a lot of pressure from the board of directors to fight off the competition from the fintechs. \n", - " \"\"\")\n", - "\n", - "customer = factory.generate_person()" + " \"\"\"\n", + ")" ] }, { @@ -68,7 +103,7 @@ { "data": { "text/plain": [ - "'Marcela Ferreira is a 47 year old Vice-President of Finance from Brazil.'" + "'Lucas Almeida is a 42 year old Vice-President of Product Innovation, Brazilian, currently living in Brazil.'" ] }, "execution_count": 3, @@ -95,15 +130,15 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I am now talking to a business and technology consultant to help \n",
    -       "me with my\n",
    -       "                              > professional problems.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I am now talking to a business and technology consultant to help me with my\n",
    +       "                   > professional problems.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I am now talking to a business and technology consultant to help \u001b[0m\n", - "\u001b[2;3;38;5;51mme with my\u001b[0m\n", - "\u001b[2;3;38;5;51m > professional problems.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I am now talking to a business and technology consultant to help me with my\u001b[0m\n", + "\u001b[2;3;38;5;51m > professional problems.\u001b[0m\n" ] }, "metadata": {}, @@ -112,7 +147,7 @@ { "data": { "text/plain": [ - "TinyPerson(name='Marcela Ferreira')" + "TinyPerson(name='Lucas Almeida')" ] }, "execution_count": 4, @@ -126,19 +161,19 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ - "
    USER --> Marcela Ferreira: [CONVERSATION] What would you say are your main problems today? Please be as specific as\n",
    -       "                  > possible.\n",
    +       "
    USER --> Lucas Almeida: [CONVERSATION] \n",
    +       "          > What would you say are your main problems today? Please be as specific as possible.\n",
            "
    \n" ], "text/plain": [ - "\u001b[1;3;4;38;5;51mUSER\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m What would you say are your main problems today? Please be as specific as\u001b[0m\n", - "\u001b[1;3;38;5;51m > possible.\u001b[0m\n" + "\u001b[1;3;4;38;5;51mUSER\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", + "\u001b[1;3;38;5;51m > What would you say are your main problems today? Please be as specific as possible.\u001b[0m\n" ] }, "metadata": {}, @@ -147,11 +182,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -160,15 +197,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [THINK] The main challenges I'm facing are developing strategies to compete with fintech\n",
    -       "                              > companies and ensuring regulatory compliance\n",
    -       "                              > while managing risks.\n",
    +       "
    Lucas Almeida acts: [THINK] \n",
    +       "                   > I need to identify the main challenges I'm facing in my role.\n",
            "
    \n" ], "text/plain": [ - "\u001b[4;32mMarcela Ferreira\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m The main challenges I'm facing are developing strategies to compete with fintech\u001b[0m\n", - "\u001b[32m > companies and ensuring regulatory compliance\u001b[0m\n", - "\u001b[32m > while managing risks.\u001b[0m\n" + "\u001b[4;32mLucas Almeida\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m \u001b[0m\n", + "\u001b[32m > I need to identify the main challenges I'm facing in my role.\u001b[0m\n" ] }, "metadata": {}, @@ -177,11 +212,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -190,21 +227,21 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [TALK] Our main problems revolve around the need to innovate to keep up with fintech\n",
    -       "                              > competitors who are disrupting the traditional\n",
    -       "                              > banking model. We're also focusing on\n",
    -       "                              > maintaining strict regulatory compliance and\n",
    -       "                              > managing risks effectively to ensure the bank's\n",
    -       "                              > profitability and sustainability.\n",
    +       "
    Lucas Almeida acts: [TALK] \n",
    +       "                   > One of the main challenges I'm facing is the pressure to deliver innovative\n",
    +       "                   > solutions that can outpace the competition. Additionally, ensuring that our new\n",
    +       "                   > products align with both customer needs and regulatory requirements is a\n",
    +       "                   > constant challenge. Balancing these demands while leading my team effectively\n",
    +       "                   > is crucial.\n",
            "
    \n" ], "text/plain": [ - "\u001b[1;4;38;5;40mMarcela Ferreira\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m Our main problems revolve around the need to innovate to keep up with fintech\u001b[0m\n", - "\u001b[1;38;5;40m > competitors who are disrupting the traditional\u001b[0m\n", - "\u001b[1;38;5;40m > banking model. We're also focusing on\u001b[0m\n", - "\u001b[1;38;5;40m > maintaining strict regulatory compliance and\u001b[0m\n", - "\u001b[1;38;5;40m > managing risks effectively to ensure the bank's\u001b[0m\n", - "\u001b[1;38;5;40m > profitability and sustainability.\u001b[0m\n" + "\u001b[1;4;38;5;40mLucas Almeida\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", + "\u001b[1;38;5;40m > One of the main challenges I'm facing is the pressure to deliver innovative\u001b[0m\n", + "\u001b[1;38;5;40m > solutions that can outpace the competition. Additionally, ensuring that our new\u001b[0m\n", + "\u001b[1;38;5;40m > products align with both customer needs and regulatory requirements is a\u001b[0m\n", + "\u001b[1;38;5;40m > constant challenge. Balancing these demands while leading my team effectively\u001b[0m\n", + "\u001b[1;38;5;40m > is crucial.\u001b[0m\n" ] }, "metadata": {}, @@ -213,11 +250,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -226,11 +265,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [DONE] \n",
    +       "
    Lucas Almeida acts: [DONE] \n",
    +       "\n",
            "
    \n" ], "text/plain": [ - "\u001b[4;38;5;252mMarcela Ferreira\u001b[0m\u001b[38;5;252m acts: \u001b[0m\u001b[1;38;5;252m[\u001b[0m\u001b[38;5;252mDONE\u001b[0m\u001b[1;38;5;252m]\u001b[0m\u001b[38;5;252m \u001b[0m\n" + "\u001b[4;38;5;252mLucas Almeida\u001b[0m\u001b[38;5;252m acts: \u001b[0m\u001b[1;38;5;252m[\u001b[0m\u001b[38;5;252mDONE\u001b[0m\u001b[1;38;5;252m]\u001b[0m\u001b[38;5;252m \u001b[0m\n", + "\n" ] }, "metadata": {}, @@ -250,11 +291,13 @@ { "data": { "text/html": [ - "
    USER --> Marcela Ferreira: [CONVERSATION] Can you elaborate on the fintechs?\n",
    +       "
    USER --> Lucas Almeida: [CONVERSATION] \n",
    +       "          > Can you elaborate on the fintechs?\n",
            "
    \n" ], "text/plain": [ - "\u001b[1;3;4;38;5;51mUSER\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m Can you elaborate on the fintechs?\u001b[0m\n" + "\u001b[1;3;4;38;5;51mUSER\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", + "\u001b[1;3;38;5;51m > Can you elaborate on the fintechs?\u001b[0m\n" ] }, "metadata": {}, @@ -263,11 +306,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -276,19 +321,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [THINK] Fintech companies are using technology to offer financial services in more\n",
    -       "                              > efficient and user-friendly ways, which attracts\n",
    -       "                              > our traditional customer base. They often\n",
    -       "                              > operate with lower overhead and can be more\n",
    -       "                              > agile in their operations and offerings.\n",
    +       "
    Lucas Almeida acts: [THINK] \n",
    +       "                   > I need to elaborate on the challenges posed by fintechs.\n",
            "
    \n" ], "text/plain": [ - "\u001b[4;32mMarcela Ferreira\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m Fintech companies are using technology to offer financial services in more\u001b[0m\n", - "\u001b[32m > efficient and user-friendly ways, which attracts\u001b[0m\n", - "\u001b[32m > our traditional customer base. They often\u001b[0m\n", - "\u001b[32m > operate with lower overhead and can be more\u001b[0m\n", - "\u001b[32m > agile in their operations and offerings.\u001b[0m\n" + "\u001b[4;32mLucas Almeida\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m \u001b[0m\n", + "\u001b[32m > I need to elaborate on the challenges posed by fintechs.\u001b[0m\n" ] }, "metadata": {}, @@ -297,11 +336,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -310,27 +351,25 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [TALK] Fintechs are leveraging cutting-edge technologies to streamline financial\n",
    -       "                              > services, making them more accessible and often\n",
    -       "                              > at a lower cost than traditional banks. They\n",
    -       "                              > excel in customer experience and innovation,\n",
    -       "                              > often outpacing us in areas like mobile banking,\n",
    -       "                              > peer-to-peer payments, and personalized\n",
    -       "                              > financial advice. Their agility and lack of\n",
    -       "                              > legacy systems allow them to adapt quickly to\n",
    -       "                              > market changes and customer needs.\n",
    +       "
    Lucas Almeida acts: [TALK] \n",
    +       "                   > Fintechs are rapidly changing the landscape of the financial industry by\n",
    +       "                   > introducing innovative technologies and business models. They often have the\n",
    +       "                   > advantage of being more agile and customer-focused, which can be challenging\n",
    +       "                   > for traditional banks to compete with. My role involves exploring partnerships\n",
    +       "                   > with these startups to integrate new technologies and developing digital\n",
    +       "                   > banking solutions that enhance customer experience. Staying ahead of regulatory\n",
    +       "                   > changes that impact product innovation is also crucial.\n",
            "
    \n" ], "text/plain": [ - "\u001b[1;4;38;5;40mMarcela Ferreira\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m Fintechs are leveraging cutting-edge technologies to streamline financial\u001b[0m\n", - "\u001b[1;38;5;40m > services, making them more accessible and often\u001b[0m\n", - "\u001b[1;38;5;40m > at a lower cost than traditional banks. They\u001b[0m\n", - "\u001b[1;38;5;40m > excel in customer experience and innovation,\u001b[0m\n", - "\u001b[1;38;5;40m > often outpacing us in areas like mobile banking,\u001b[0m\n", - "\u001b[1;38;5;40m > peer-to-peer payments, and personalized\u001b[0m\n", - "\u001b[1;38;5;40m > financial advice. Their agility and lack of\u001b[0m\n", - "\u001b[1;38;5;40m > legacy systems allow them to adapt quickly to\u001b[0m\n", - "\u001b[1;38;5;40m > market changes and customer needs.\u001b[0m\n" + "\u001b[1;4;38;5;40mLucas Almeida\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", + "\u001b[1;38;5;40m > Fintechs are rapidly changing the landscape of the financial industry by\u001b[0m\n", + "\u001b[1;38;5;40m > introducing innovative technologies and business models. They often have the\u001b[0m\n", + "\u001b[1;38;5;40m > advantage of being more agile and customer-focused, which can be challenging\u001b[0m\n", + "\u001b[1;38;5;40m > for traditional banks to compete with. My role involves exploring partnerships\u001b[0m\n", + "\u001b[1;38;5;40m > with these startups to integrate new technologies and developing digital\u001b[0m\n", + "\u001b[1;38;5;40m > banking solutions that enhance customer experience. Staying ahead of regulatory\u001b[0m\n", + "\u001b[1;38;5;40m > changes that impact product innovation is also crucial.\u001b[0m\n" ] }, "metadata": {}, @@ -339,11 +378,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -352,11 +393,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [DONE] \n",
    +       "
    Lucas Almeida acts: [DONE] \n",
    +       "\n",
            "
    \n" ], "text/plain": [ - "\u001b[4;38;5;252mMarcela Ferreira\u001b[0m\u001b[38;5;252m acts: \u001b[0m\u001b[1;38;5;252m[\u001b[0m\u001b[38;5;252mDONE\u001b[0m\u001b[1;38;5;252m]\u001b[0m\u001b[38;5;252m \u001b[0m\n" + "\u001b[4;38;5;252mLucas Almeida\u001b[0m\u001b[38;5;252m acts: \u001b[0m\u001b[1;38;5;252m[\u001b[0m\u001b[38;5;252mDONE\u001b[0m\u001b[1;38;5;252m]\u001b[0m\u001b[38;5;252m \u001b[0m\n", + "\n" ] }, "metadata": {}, @@ -375,15 +418,13 @@ { "data": { "text/html": [ - "
    USER --> Marcela Ferreira: [CONVERSATION] If you could improve in one of these aspects to better compete, what \n",
    -       "would that\n",
    -       "                  > be?\n",
    +       "
    USER --> Lucas Almeida: [CONVERSATION] \n",
    +       "          > If you could improve in one of these aspects to better compete, what would that be?\n",
            "
    \n" ], "text/plain": [ - "\u001b[1;3;4;38;5;51mUSER\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m If you could improve in one of these aspects to better compete, what \u001b[0m\n", - "\u001b[1;3;38;5;51mwould that\u001b[0m\n", - "\u001b[1;3;38;5;51m > be?\u001b[0m\n" + "\u001b[1;3;4;38;5;51mUSER\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", + "\u001b[1;3;38;5;51m > If you could improve in one of these aspects to better compete, what would that be?\u001b[0m\n" ] }, "metadata": {}, @@ -392,11 +433,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -405,17 +448,15 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [THINK] Improving our digital platform to enhance customer experience would be a key\n",
    -       "                              > area. This includes investing in technology to\n",
    -       "                              > provide more personalized services and user-\n",
    -       "                              > friendly interfaces.\n",
    +       "
    Lucas Almeida acts: [THINK] \n",
    +       "                   > I need to consider which aspect of my role I would most like to improve to\n",
    +       "                   > better compete with fintechs.\n",
            "
    \n" ], "text/plain": [ - "\u001b[4;32mMarcela Ferreira\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m Improving our digital platform to enhance customer experience would be a key\u001b[0m\n", - "\u001b[32m > area. This includes investing in technology to\u001b[0m\n", - "\u001b[32m > provide more personalized services and user-\u001b[0m\n", - "\u001b[32m > friendly interfaces.\u001b[0m\n" + "\u001b[4;32mLucas Almeida\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m \u001b[0m\n", + "\u001b[32m > I need to consider which aspect of my role I would most like to improve to\u001b[0m\n", + "\u001b[32m > better compete with fintechs.\u001b[0m\n" ] }, "metadata": {}, @@ -424,11 +465,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -437,25 +480,19 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [TALK] I believe the most critical area for improvement would be our digital platform.\n",
    -       "                              > Enhancing the customer experience through a more\n",
    -       "                              > robust, secure, and user-friendly online banking\n",
    -       "                              > system would be pivotal. We need to focus on\n",
    -       "                              > personalization, leveraging data analytics to\n",
    -       "                              > offer tailored financial advice and products. By\n",
    -       "                              > doing so, we can regain a competitive edge and\n",
    -       "                              > meet the evolving expectations of our customers.\n",
    +       "
    Lucas Almeida acts: [TALK] \n",
    +       "                   > If I could improve in one aspect, it would be enhancing our agility in product\n",
    +       "                   > development. Being able to quickly adapt and implement new technologies and\n",
    +       "                   > ideas would allow us to better compete with fintechs, who often have the\n",
    +       "                   > advantage of speed and flexibility.\n",
            "
    \n" ], "text/plain": [ - "\u001b[1;4;38;5;40mMarcela Ferreira\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m I believe the most critical area for improvement would be our digital platform.\u001b[0m\n", - "\u001b[1;38;5;40m > Enhancing the customer experience through a more\u001b[0m\n", - "\u001b[1;38;5;40m > robust, secure, and user-friendly online banking\u001b[0m\n", - "\u001b[1;38;5;40m > system would be pivotal. We need to focus on\u001b[0m\n", - "\u001b[1;38;5;40m > personalization, leveraging data analytics to\u001b[0m\n", - "\u001b[1;38;5;40m > offer tailored financial advice and products. By\u001b[0m\n", - "\u001b[1;38;5;40m > doing so, we can regain a competitive edge and\u001b[0m\n", - "\u001b[1;38;5;40m > meet the evolving expectations of our customers.\u001b[0m\n" + "\u001b[1;4;38;5;40mLucas Almeida\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", + "\u001b[1;38;5;40m > If I could improve in one aspect, it would be enhancing our agility in product\u001b[0m\n", + "\u001b[1;38;5;40m > development. Being able to quickly adapt and implement new technologies and\u001b[0m\n", + "\u001b[1;38;5;40m > ideas would allow us to better compete with fintechs, who often have the\u001b[0m\n", + "\u001b[1;38;5;40m > advantage of speed and flexibility.\u001b[0m\n" ] }, "metadata": {}, @@ -464,11 +501,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -477,11 +516,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [DONE] \n",
    +       "
    Lucas Almeida acts: [DONE] \n",
    +       "\n",
            "
    \n" ], "text/plain": [ - "\u001b[4;38;5;252mMarcela Ferreira\u001b[0m\u001b[38;5;252m acts: \u001b[0m\u001b[1;38;5;252m[\u001b[0m\u001b[38;5;252mDONE\u001b[0m\u001b[1;38;5;252m]\u001b[0m\u001b[38;5;252m \u001b[0m\n" + "\u001b[4;38;5;252mLucas Almeida\u001b[0m\u001b[38;5;252m acts: \u001b[0m\u001b[1;38;5;252m[\u001b[0m\u001b[38;5;252mDONE\u001b[0m\u001b[1;38;5;252m]\u001b[0m\u001b[38;5;252m \u001b[0m\n", + "\n" ] }, "metadata": {}, @@ -494,21 +535,21 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/html": [ - "
    USER --> Marcela Ferreira: [CONVERSATION] Please give more detail about that, so that we can think about a project \n",
    -       "to\n",
    -       "                  > pursue this direction.\n",
    +       "
    USER --> Lucas Almeida: [CONVERSATION] \n",
    +       "          > Please give more detail about that, so that we can think about a project to pursue this\n",
    +       "          > direction.\n",
            "
    \n" ], "text/plain": [ - "\u001b[1;3;4;38;5;51mUSER\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m Please give more detail about that, so that we can think about a project \u001b[0m\n", - "\u001b[1;3;38;5;51mto\u001b[0m\n", - "\u001b[1;3;38;5;51m > pursue this direction.\u001b[0m\n" + "\u001b[1;3;4;38;5;51mUSER\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", + "\u001b[1;3;38;5;51m > Please give more detail about that, so that we can think about a project to pursue this\u001b[0m\n", + "\u001b[1;3;38;5;51m > direction.\u001b[0m\n" ] }, "metadata": {}, @@ -517,11 +558,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -530,21 +573,15 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [THINK] To elaborate, we need to invest in a state-of-the-art IT infrastructure that\n",
    -       "                              > supports advanced data analytics for customer\n",
    -       "                              > insights. We should also develop a more\n",
    -       "                              > intuitive and seamless multi-channel experience,\n",
    -       "                              > integrating mobile, online, and in-branch\n",
    -       "                              > services.\n",
    +       "
    Lucas Almeida acts: [THINK] \n",
    +       "                   > I need to provide more details on how we can enhance our agility in product\n",
    +       "                   > development.\n",
            "
    \n" ], "text/plain": [ - "\u001b[4;32mMarcela Ferreira\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m To elaborate, we need to invest in a state-of-the-art IT infrastructure that\u001b[0m\n", - "\u001b[32m > supports advanced data analytics for customer\u001b[0m\n", - "\u001b[32m > insights. We should also develop a more\u001b[0m\n", - "\u001b[32m > intuitive and seamless multi-channel experience,\u001b[0m\n", - "\u001b[32m > integrating mobile, online, and in-branch\u001b[0m\n", - "\u001b[32m > services.\u001b[0m\n" + "\u001b[4;32mLucas Almeida\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m \u001b[0m\n", + "\u001b[32m > I need to provide more details on how we can enhance our agility in product\u001b[0m\n", + "\u001b[32m > development.\u001b[0m\n" ] }, "metadata": {}, @@ -553,11 +590,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -566,41 +605,27 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [TALK] Certainly. A detailed project would involve creating a comprehensive digital\n",
    -       "                              > transformation plan. This would include\n",
    -       "                              > upgrading our core banking system to allow for\n",
    -       "                              > real-time processing and better integration with\n",
    -       "                              > new technologies. We'd also look into adopting\n",
    -       "                              > cloud services for scalability and implementing\n",
    -       "                              > advanced cybersecurity measures. On the\n",
    -       "                              > customer-facing side, we'd redesign our online\n",
    -       "                              > and mobile banking platforms to offer a more\n",
    -       "                              > personalized and engaging user experience, with\n",
    -       "                              > features like financial management tools,\n",
    -       "                              > customized product recommendations, and enhanced\n",
    -       "                              > support via chatbots and virtual assistants. The\n",
    -       "                              > goal is to create a seamless and secure digital\n",
    -       "                              > ecosystem that meets the needs of our tech-savvy\n",
    -       "                              > customers.\n",
    +       "
    Lucas Almeida acts: [TALK] \n",
    +       "                   > To enhance our agility in product development, we could focus on implementing\n",
    +       "                   > agile methodologies more thoroughly across our teams. This would involve\n",
    +       "                   > regular sprints, continuous feedback loops, and cross-functional collaboration\n",
    +       "                   > to ensure that we can quickly adapt to changes and deliver products that meet\n",
    +       "                   > customer needs. Additionally, investing in training for our team to improve\n",
    +       "                   > their skills in agile practices and tools could be beneficial. We could also\n",
    +       "                   > explore partnerships with fintechs to learn from their agile processes and\n",
    +       "                   > potentially integrate some of their practices into our own.\n",
            "
    \n" ], "text/plain": [ - "\u001b[1;4;38;5;40mMarcela Ferreira\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m Certainly. A detailed project would involve creating a comprehensive digital\u001b[0m\n", - "\u001b[1;38;5;40m > transformation plan. This would include\u001b[0m\n", - "\u001b[1;38;5;40m > upgrading our core banking system to allow for\u001b[0m\n", - "\u001b[1;38;5;40m > real-time processing and better integration with\u001b[0m\n", - "\u001b[1;38;5;40m > new technologies. We'd also look into adopting\u001b[0m\n", - "\u001b[1;38;5;40m > cloud services for scalability and implementing\u001b[0m\n", - "\u001b[1;38;5;40m > advanced cybersecurity measures. On the\u001b[0m\n", - "\u001b[1;38;5;40m > customer-facing side, we'd redesign our online\u001b[0m\n", - "\u001b[1;38;5;40m > and mobile banking platforms to offer a more\u001b[0m\n", - "\u001b[1;38;5;40m > personalized and engaging user experience, with\u001b[0m\n", - "\u001b[1;38;5;40m > features like financial management tools,\u001b[0m\n", - "\u001b[1;38;5;40m > customized product recommendations, and enhanced\u001b[0m\n", - "\u001b[1;38;5;40m > support via chatbots and virtual assistants. The\u001b[0m\n", - "\u001b[1;38;5;40m > goal is to create a seamless and secure digital\u001b[0m\n", - "\u001b[1;38;5;40m > ecosystem that meets the needs of our tech-savvy\u001b[0m\n", - "\u001b[1;38;5;40m > customers.\u001b[0m\n" + "\u001b[1;4;38;5;40mLucas Almeida\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", + "\u001b[1;38;5;40m > To enhance our agility in product development, we could focus on implementing\u001b[0m\n", + "\u001b[1;38;5;40m > agile methodologies more thoroughly across our teams. 
This would involve\u001b[0m\n", + "\u001b[1;38;5;40m > regular sprints, continuous feedback loops, and cross-functional collaboration\u001b[0m\n", + "\u001b[1;38;5;40m > to ensure that we can quickly adapt to changes and deliver products that meet\u001b[0m\n", + "\u001b[1;38;5;40m > customer needs. Additionally, investing in training for our team to improve\u001b[0m\n", + "\u001b[1;38;5;40m > their skills in agile practices and tools could be beneficial. We could also\u001b[0m\n", + "\u001b[1;38;5;40m > explore partnerships with fintechs to learn from their agile processes and\u001b[0m\n", + "\u001b[1;38;5;40m > potentially integrate some of their practices into our own.\u001b[0m\n" ] }, "metadata": {}, @@ -609,11 +634,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira --> Marcela Ferreira: [THOUGHT] I will now act a bit, and then issue DONE.\n",
    +       "
    Lucas Almeida --> Lucas Almeida: [THOUGHT] \n",
    +       "                   > I will now act a bit, and then issue DONE.\n",
            "
    \n" ], "text/plain": [ - "\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mMarcela Ferreira\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m I will now act a bit, and then issue DONE.\u001b[0m\n" + "\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLucas Almeida\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" ] }, "metadata": {}, @@ -622,11 +649,13 @@ { "data": { "text/html": [ - "
    Marcela Ferreira acts: [DONE] \n",
    +       "
    Lucas Almeida acts: [DONE] \n",
    +       "\n",
            "
    \n" ], "text/plain": [ - "\u001b[4;38;5;252mMarcela Ferreira\u001b[0m\u001b[38;5;252m acts: \u001b[0m\u001b[1;38;5;252m[\u001b[0m\u001b[38;5;252mDONE\u001b[0m\u001b[1;38;5;252m]\u001b[0m\u001b[38;5;252m \u001b[0m\n" + "\u001b[4;38;5;252mLucas Almeida\u001b[0m\u001b[38;5;252m acts: \u001b[0m\u001b[1;38;5;252m[\u001b[0m\u001b[38;5;252mDONE\u001b[0m\u001b[1;38;5;252m]\u001b[0m\u001b[38;5;252m \u001b[0m\n", + "\n" ] }, "metadata": {}, @@ -648,7 +677,7 @@ ], "metadata": { "kernelspec": { - "display_name": "base", + "display_name": "Python 3", "language": "python", "name": "python3" }, diff --git a/examples/online_advertisement_for_travel.ipynb b/examples/online_advertisement_for_travel.ipynb index 5cc1b5a..1f5b302 100644 --- a/examples/online_advertisement_for_travel.ipynb +++ b/examples/online_advertisement_for_travel.ipynb @@ -11,7 +11,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -39,8 +39,8 @@ "import tinytroupe\n", "from tinytroupe.agent import TinyPerson\n", "from tinytroupe.examples import create_lisa_the_data_scientist, create_oscar_the_architect, create_marcos_the_physician\n", - "from tinytroupe.personfactory import TinyPersonFactory\n", - "from tinytroupe.extraction import InteractionResultsExtractor" + "from tinytroupe.factory import TinyPersonFactory\n", + "from tinytroupe.extraction import ResultsExtractor" ] }, { @@ -944,7 +944,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -958,7 +958,7 @@ } ], "source": [ - "extractor = InteractionResultsExtractor()\n", + "extractor = ResultsExtractor()\n", "choices = []\n", "\n", "for person in people:\n", @@ -1882,7 +1882,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -1947,7 +1947,7 @@ } ], "source": [ - "extractor = InteractionResultsExtractor()\n", + "extractor = ResultsExtractor()\n", "\n", "choices 
=[]\n", "\n", diff --git a/examples/product_brainstorming.ipynb b/examples/product_brainstorming.ipynb index 5995e87..56c6343 100644 --- a/examples/product_brainstorming.ipynb +++ b/examples/product_brainstorming.ipynb @@ -2302,9 +2302,9 @@ } ], "source": [ - "from tinytroupe.extraction import InteractionResultsExtractor\n", + "from tinytroupe.extraction import ResultsExtractor\n", "\n", - "extractor = InteractionResultsExtractor()\n", + "extractor = ResultsExtractor()\n", "\n", "extractor.extract_results_from_agent(rapporteur, \n", " extraction_objective=\"Summarize the the ideas that the group came up with, explaining each idea as an item of a list.\" \\\n", diff --git a/examples/scratch/tool_usage.ipynb b/examples/scratch/tool_usage.ipynb index e3727e2..c06a2be 100644 --- a/examples/scratch/tool_usage.ipynb +++ b/examples/scratch/tool_usage.ipynb @@ -9,7 +9,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -28,14 +28,14 @@ "sys.path.append('../..')\n", "\n", "import tinytroupe\n", - "from tinytroupe.agent import TinyPerson, ToolUse\n", + "from tinytroupe.agent import TinyPerson, TinyToolUse\n", "from tinytroupe.environment import TinyWorld, TinySocialNetwork\n", - "from tinytroupe.personfactory import TinyPersonFactory\n", + "from tinytroupe.factory import TinyPersonFactory\n", "from tinytroupe.extraction import default_extractor as extractor\n", - "from tinytroupe.extraction import InteractionResultsReducer\n", + "from tinytroupe.extraction import ResultsReducer\n", "import tinytroupe.control as control\n", "from tinytroupe.extraction import ArtifactExporter\n", - "from tinytroupe.enrichment import Enricher\n", + "from tinytroupe.enrichment import TinyEnricher\n", "\n", "from tinytroupe.tools import TinyWordProcessor" ] @@ -72,13 +72,13 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ "exporter = 
ArtifactExporter(base_output_folder=\"./outputs/scratches/tool_usage\")\n", - "enricher = Enricher()\n", - "tooluse_faculty = ToolUse(tools=[TinyWordProcessor(exporter=exporter, enricher=enricher)])\n" + "enricher = TinyEnricher()\n", + "tooluse_faculty = TinyToolUse(tools=[TinyWordProcessor(exporter=exporter, enricher=enricher)])\n" ] }, { diff --git a/examples/simple_chat.ipynb b/examples/simple_chat.ipynb index 9ca7828..42ff5a3 100644 --- a/examples/simple_chat.ipynb +++ b/examples/simple_chat.ipynb @@ -18,15 +18,42 @@ "name": "stdout", "output_type": "stream", "text": [ - "Failed to find custom config on: c:\\Users\\pdasilva\\OneDrive - Microsoft\\Git repositories\\tinytroupe\\tinytroupe-core-opensource\\examples\\config.ini\n", - "Now switching to default config file...\n", - "Looking for config on: c:\\Users\\pdasilva\\OneDrive - Microsoft\\Git repositories\\tinytroupe\\tinytroupe-core-opensource\\examples\\..\\tinytroupe\\config.ini\n", "\n", "!!!!\n", "DISCLAIMER: TinyTroupe relies on Artificial Intelligence (AI) models to generate content. \n", "The AI models are not perfect and may produce inappropriate or inacurate results. 
\n", "For any serious or consequential use, please review the generated content before using it.\n", "!!!!\n", + "\n", + "Looking for default config on: c:\\Users\\pdasilva\\OneDrive - Microsoft\\Git repositories\\tinytroupe-opensource\\TinyTroupe\\examples\\..\\tinytroupe\\config.ini\n", + "Found custom config on: c:\\Users\\pdasilva\\OneDrive - Microsoft\\Git repositories\\tinytroupe-opensource\\TinyTroupe\\examples\\config.ini\n", + "\n", + "=================================\n", + "Current TinyTroupe configuration \n", + "=================================\n", + "[OpenAI]\n", + "api_type = openai\n", + "azure_api_version = 2023-05-15\n", + "model = gpt-4o\n", + "max_tokens = 4000\n", + "temperature = 0.3\n", + "freq_penalty = 0.0\n", + "presence_penalty = 0.0\n", + "timeout = 60\n", + "max_attempts = 5\n", + "waiting_time = 1\n", + "exponential_backoff_factor = 5\n", + "embedding_model = text-embedding-3-small\n", + "cache_api_calls = False\n", + "cache_file_name = openai_api_cache.pickle\n", + "max_content_display_length = 1024\n", + "\n", + "[Simulation]\n", + "rai_harmful_content_prevention = True\n", + "rai_copyright_infringement_prevention = True\n", + "\n", + "[Logging]\n", + "loglevel = INFO\n", "\n" ] } @@ -86,7 +113,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:03,283 - tinytroupe - INFO - [Chat Room] Running world simulation step 1 of 4.\n" + "2024-11-10 23:56:30,013 - tinytroupe - INFO - [Chat Room] Running world simulation step 1 of 4.\n" ] }, { @@ -106,9 +133,46 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:03,290 - tinytroupe - INFO - [Chat Room] No timedelta provided, so the datetime was not advanced.\n" + "2024-11-10 23:56:30,018 - tinytroupe - INFO - [Chat Room] No timedelta provided, so the datetime was not advanced.\n" + ] + }, + { + "data": { + "text/html": [ + "
    Lisa --> Lisa: [THOUGHT] \n",
    +       "          > I will now act a bit, and then issue DONE.\n",
    +       "
    \n" + ], + "text/plain": [ + "\u001b[2;3;4;38;5;51mLisa\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLisa\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "2024-11-10 23:56:30,756 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, + { + "data": { + "text/html": [ + "
    Lisa acts: [THINK] \n",
    +       "          > I should talk to Oscar to learn more about him.\n",
    +       "
    \n" + ], + "text/plain": [ + "\u001b[4;32mLisa\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m \u001b[0m\n", + "\u001b[32m > I should talk to Oscar to learn more about him.\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, { "data": { "text/html": [ @@ -128,19 +192,19 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:03,892 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:56:34,078 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { "data": { "text/html": [ "
    Lisa acts: [TALK] \n",
    -       "          > Hi Oscar, I'd love to know more about you. Could you tell me a bit about yourself?\n",
    +       "          > Hi Oscar, I was hoping to learn more about you. Could you tell me a bit about yourself?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;4;38;5;40mLisa\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", - "\u001b[1;38;5;40m > Hi Oscar, I'd love to know more about you. Could you tell me a bit about yourself?\u001b[0m\n" + "\u001b[1;38;5;40m > Hi Oscar, I was hoping to learn more about you. Could you tell me a bit about yourself?\u001b[0m\n" ] }, "metadata": {}, @@ -165,7 +229,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:08,148 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:56:38,922 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { @@ -187,12 +251,12 @@ "data": { "text/html": [ "
    Lisa --> Oscar: [CONVERSATION] \n",
    -       "          > Hi Oscar, I'd love to know more about you. Could you tell me a bit about yourself?\n",
    +       "          > Hi Oscar, I was hoping to learn more about you. Could you tell me a bit about yourself?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;3;4;38;5;51mLisa\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mOscar\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", - "\u001b[1;3;38;5;51m > Hi Oscar, I'd love to know more about you. Could you tell me a bit about yourself?\u001b[0m\n" + "\u001b[1;3;38;5;51m > Hi Oscar, I was hoping to learn more about you. Could you tell me a bit about yourself?\u001b[0m\n" ] }, "metadata": {}, @@ -217,7 +281,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:12,043 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:56:41,859 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { @@ -225,19 +289,19 @@ "text/html": [ "
    Oscar acts: [TALK] \n",
            "           > Hi Lisa! Sure, I'd be happy to share a bit about myself. I'm Oscar, a 30-year-old\n",
    -       "           > architect from Germany. I work at a company called Awesome Inc., where I focus on\n",
    -       "           > designing standard elements for new apartment buildings. I love modernist architecture,\n",
    -       "           > new technologies, and sustainable practices. In my free time, I enjoy traveling to\n",
    -       "           > exotic places, playing the guitar, and reading science fiction books. How about you?\n",
    +       "           > architect from Germany. I work at Awesome Inc., where I focus on designing standard\n",
    +       "           > elements for new apartment buildings. I love modernist architecture, new technologies,\n",
    +       "           > and sustainable practices. In my free time, I enjoy traveling to exotic places, playing\n",
    +       "           > the guitar, and reading science fiction. How about you?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;4;38;5;40mOscar\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", "\u001b[1;38;5;40m > Hi Lisa! Sure, I'd be happy to share a bit about myself. I'm Oscar, a \u001b[0m\u001b[1;38;5;40m30\u001b[0m\u001b[1;38;5;40m-year-old\u001b[0m\n", - "\u001b[1;38;5;40m > architect from Germany. I work at a company called Awesome Inc., where I focus on\u001b[0m\n", - "\u001b[1;38;5;40m > designing standard elements for new apartment buildings. I love modernist architecture,\u001b[0m\n", - "\u001b[1;38;5;40m > new technologies, and sustainable practices. In my free time, I enjoy traveling to\u001b[0m\n", - "\u001b[1;38;5;40m > exotic places, playing the guitar, and reading science fiction books. How about you?\u001b[0m\n" + "\u001b[1;38;5;40m > architect from Germany. I work at Awesome Inc., where I focus on designing standard\u001b[0m\n", + "\u001b[1;38;5;40m > elements for new apartment buildings. I love modernist architecture, new technologies,\u001b[0m\n", + "\u001b[1;38;5;40m > and sustainable practices. In my free time, I enjoy traveling to exotic places, playing\u001b[0m\n", + "\u001b[1;38;5;40m > the guitar, and reading science fiction. How about you?\u001b[0m\n" ] }, "metadata": {}, @@ -262,7 +326,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:19,267 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:56:48,336 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { @@ -285,19 +349,19 @@ "text/html": [ "
    Oscar --> Lisa: [CONVERSATION] \n",
            "           > Hi Lisa! Sure, I'd be happy to share a bit about myself. I'm Oscar, a 30-year-old\n",
    -       "           > architect from Germany. I work at a company called Awesome Inc., where I focus on\n",
    -       "           > designing standard elements for new apartment buildings. I love modernist architecture,\n",
    -       "           > new technologies, and sustainable practices. In my free time, I enjoy traveling to\n",
    -       "           > exotic places, playing the guitar, and reading science fiction books. How about you?\n",
    +       "           > architect from Germany. I work at Awesome Inc., where I focus on designing standard\n",
    +       "           > elements for new apartment buildings. I love modernist architecture, new technologies,\n",
    +       "           > and sustainable practices. In my free time, I enjoy traveling to exotic places, playing\n",
    +       "           > the guitar, and reading science fiction. How about you?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;3;4;38;5;51mOscar\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mLisa\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", "\u001b[1;3;38;5;51m > Hi Lisa! Sure, I'd be happy to share a bit about myself. I'm Oscar, a \u001b[0m\u001b[1;3;38;5;51m30\u001b[0m\u001b[1;3;38;5;51m-year-old\u001b[0m\n", - "\u001b[1;3;38;5;51m > architect from Germany. I work at a company called Awesome Inc., where I focus on\u001b[0m\n", - "\u001b[1;3;38;5;51m > designing standard elements for new apartment buildings. I love modernist architecture,\u001b[0m\n", - "\u001b[1;3;38;5;51m > new technologies, and sustainable practices. In my free time, I enjoy traveling to\u001b[0m\n", - "\u001b[1;3;38;5;51m > exotic places, playing the guitar, and reading science fiction books. How about you?\u001b[0m\n" + "\u001b[1;3;38;5;51m > architect from Germany. I work at Awesome Inc., where I focus on designing standard\u001b[0m\n", + "\u001b[1;3;38;5;51m > elements for new apartment buildings. I love modernist architecture, new technologies,\u001b[0m\n", + "\u001b[1;3;38;5;51m > and sustainable practices. In my free time, I enjoy traveling to exotic places, playing\u001b[0m\n", + "\u001b[1;3;38;5;51m > the guitar, and reading science fiction. 
How about you?\u001b[0m\n" ] }, "metadata": {}, @@ -307,7 +371,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:22,836 - tinytroupe - INFO - [Chat Room] Running world simulation step 2 of 4.\n" + "2024-11-10 23:56:52,344 - tinytroupe - INFO - [Chat Room] Running world simulation step 2 of 4.\n" ] }, { @@ -327,7 +391,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:22,839 - tinytroupe - INFO - [Chat Room] No timedelta provided, so the datetime was not advanced.\n" + "2024-11-10 23:56:52,349 - tinytroupe - INFO - [Chat Room] No timedelta provided, so the datetime was not advanced.\n" ] }, { @@ -349,27 +413,68 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:23,136 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:56:52,668 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" + ] + }, + { + "data": { + "text/html": [ + "
    Lisa acts: [THINK] \n",
    +       "          > Oscar seems to have a fascinating background and interests. I should share a bit about\n",
    +       "          > myself in return.\n",
    +       "
    \n" + ], + "text/plain": [ + "\u001b[4;32mLisa\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m \u001b[0m\n", + "\u001b[32m > Oscar seems to have a fascinating background and interests. I should share a bit about\u001b[0m\n", + "\u001b[32m > myself in return.\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
    Lisa --> Lisa: [THOUGHT] \n",
    +       "          > I will now act a bit, and then issue DONE.\n",
    +       "
    \n" + ], + "text/plain": [ + "\u001b[2;3;4;38;5;51mLisa\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLisa\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "2024-11-10 23:56:56,507 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { "data": { "text/html": [ "
    Lisa acts: [TALK] \n",
    -       "          > Thanks for sharing, Oscar! It's great to meet someone with such diverse interests. I'm\n",
    -       "          > Lisa, a data scientist from Canada. I work at Microsoft, focusing on improving search\n",
    -       "          > results using machine learning. In my free time, I enjoy cooking, playing the piano, and\n",
    -       "          > watching movies. It's fascinating to hear about your work in architecture and your\n",
    -       "          > passion for sustainable practices. Do you have a favorite project you've worked on?\n",
    +       "          > It's great to meet you, Oscar! I'm Lisa, a 28-year-old data scientist from Canada. I\n",
    +       "          > work at Microsoft, focusing on improving search results through data analysis and\n",
    +       "          > machine learning. I love learning new things, especially in AI and natural language\n",
    +       "          > processing. In my free time, I enjoy cooking, playing the piano, and watching movies,\n",
    +       "          > particularly comedies and thrillers. It's fascinating that you enjoy modernist\n",
    +       "          > architecture and sustainable practices. Have you traveled anywhere interesting recently?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;4;38;5;40mLisa\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", - "\u001b[1;38;5;40m > Thanks for sharing, Oscar! It's great to meet someone with such diverse interests. I'm\u001b[0m\n", - "\u001b[1;38;5;40m > Lisa, a data scientist from Canada. I work at Microsoft, focusing on improving search\u001b[0m\n", - "\u001b[1;38;5;40m > results using machine learning. In my free time, I enjoy cooking, playing the piano, and\u001b[0m\n", - "\u001b[1;38;5;40m > watching movies. It's fascinating to hear about your work in architecture and your\u001b[0m\n", - "\u001b[1;38;5;40m > passion for sustainable practices. Do you have a favorite project you've worked on?\u001b[0m\n" + "\u001b[1;38;5;40m > It's great to meet you, Oscar! I'm Lisa, a \u001b[0m\u001b[1;38;5;40m28\u001b[0m\u001b[1;38;5;40m-year-old data scientist from Canada. I\u001b[0m\n", + "\u001b[1;38;5;40m > work at Microsoft, focusing on improving search results through data analysis and\u001b[0m\n", + "\u001b[1;38;5;40m > machine learning. I love learning new things, especially in AI and natural language\u001b[0m\n", + "\u001b[1;38;5;40m > processing. In my free time, I enjoy cooking, playing the piano, and watching movies,\u001b[0m\n", + "\u001b[1;38;5;40m > particularly comedies and thrillers. It's fascinating that you enjoy modernist\u001b[0m\n", + "\u001b[1;38;5;40m > architecture and sustainable practices. Have you traveled anywhere interesting recently?\u001b[0m\n" ] }, "metadata": {}, @@ -394,7 +499,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:27,971 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:57:01,160 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { @@ -416,20 +521,22 @@ "data": { "text/html": [ "
    Lisa --> Oscar: [CONVERSATION] \n",
    -       "          > Thanks for sharing, Oscar! It's great to meet someone with such diverse interests. I'm\n",
    -       "          > Lisa, a data scientist from Canada. I work at Microsoft, focusing on improving search\n",
    -       "          > results using machine learning. In my free time, I enjoy cooking, playing the piano, and\n",
    -       "          > watching movies. It's fascinating to hear about your work in architecture and your\n",
    -       "          > passion for sustainable practices. Do you have a favorite project you've worked on?\n",
    +       "          > It's great to meet you, Oscar! I'm Lisa, a 28-year-old data scientist from Canada. I\n",
    +       "          > work at Microsoft, focusing on improving search results through data analysis and\n",
    +       "          > machine learning. I love learning new things, especially in AI and natural language\n",
    +       "          > processing. In my free time, I enjoy cooking, playing the piano, and watching movies,\n",
    +       "          > particularly comedies and thrillers. It's fascinating that you enjoy modernist\n",
    +       "          > architecture and sustainable practices. Have you traveled anywhere interesting recently?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;3;4;38;5;51mLisa\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mOscar\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", - "\u001b[1;3;38;5;51m > Thanks for sharing, Oscar! It's great to meet someone with such diverse interests. I'm\u001b[0m\n", - "\u001b[1;3;38;5;51m > Lisa, a data scientist from Canada. I work at Microsoft, focusing on improving search\u001b[0m\n", - "\u001b[1;3;38;5;51m > results using machine learning. In my free time, I enjoy cooking, playing the piano, and\u001b[0m\n", - "\u001b[1;3;38;5;51m > watching movies. It's fascinating to hear about your work in architecture and your\u001b[0m\n", - "\u001b[1;3;38;5;51m > passion for sustainable practices. Do you have a favorite project you've worked on?\u001b[0m\n" + "\u001b[1;3;38;5;51m > It's great to meet you, Oscar! I'm Lisa, a \u001b[0m\u001b[1;3;38;5;51m28\u001b[0m\u001b[1;3;38;5;51m-year-old data scientist from Canada. I\u001b[0m\n", + "\u001b[1;3;38;5;51m > work at Microsoft, focusing on improving search results through data analysis and\u001b[0m\n", + "\u001b[1;3;38;5;51m > machine learning. I love learning new things, especially in AI and natural language\u001b[0m\n", + "\u001b[1;3;38;5;51m > processing. In my free time, I enjoy cooking, playing the piano, and watching movies,\u001b[0m\n", + "\u001b[1;3;38;5;51m > particularly comedies and thrillers. It's fascinating that you enjoy modernist\u001b[0m\n", + "\u001b[1;3;38;5;51m > architecture and sustainable practices. 
Have you traveled anywhere interesting recently?\u001b[0m\n" ] }, "metadata": {}, @@ -454,27 +561,27 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:31,759 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:57:04,321 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { "data": { "text/html": [ "
    Oscar acts: [TALK] \n",
    -       "           > Nice to meet you, Lisa! Your work sounds really interesting, especially with the focus\n",
    -       "           > on machine learning. As for my favorite project, I once worked on a sustainable housing\n",
    -       "           > development that incorporated a lot of green technologies and materials. It was\n",
    -       "           > challenging but incredibly rewarding to see it come to life. How about you? Do you have\n",
    -       "           > a favorite project or achievement in your field?\n",
    +       "           > Nice to meet you, Lisa! Your work sounds really interesting, especially with AI and\n",
    +       "           > machine learning. As for traveling, I recently visited Japan, which was an amazing\n",
    +       "           > experience. The blend of traditional and modern architecture there is truly inspiring.\n",
    +       "           > Plus, the food was incredible! How about you? Have you been on any exciting trips\n",
    +       "           > lately?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;4;38;5;40mOscar\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", - "\u001b[1;38;5;40m > Nice to meet you, Lisa! Your work sounds really interesting, especially with the focus\u001b[0m\n", - "\u001b[1;38;5;40m > on machine learning. As for my favorite project, I once worked on a sustainable housing\u001b[0m\n", - "\u001b[1;38;5;40m > development that incorporated a lot of green technologies and materials. It was\u001b[0m\n", - "\u001b[1;38;5;40m > challenging but incredibly rewarding to see it come to life. How about you? Do you have\u001b[0m\n", - "\u001b[1;38;5;40m > a favorite project or achievement in your field?\u001b[0m\n" + "\u001b[1;38;5;40m > Nice to meet you, Lisa! Your work sounds really interesting, especially with AI and\u001b[0m\n", + "\u001b[1;38;5;40m > machine learning. As for traveling, I recently visited Japan, which was an amazing\u001b[0m\n", + "\u001b[1;38;5;40m > experience. The blend of traditional and modern architecture there is truly inspiring.\u001b[0m\n", + "\u001b[1;38;5;40m > Plus, the food was incredible! How about you? Have you been on any exciting trips\u001b[0m\n", + "\u001b[1;38;5;40m > lately?\u001b[0m\n" ] }, "metadata": {}, @@ -499,7 +606,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:38,944 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:57:11,912 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { @@ -521,20 +628,20 @@ "data": { "text/html": [ "
    Oscar --> Lisa: [CONVERSATION] \n",
    -       "           > Nice to meet you, Lisa! Your work sounds really interesting, especially with the focus\n",
    -       "           > on machine learning. As for my favorite project, I once worked on a sustainable housing\n",
    -       "           > development that incorporated a lot of green technologies and materials. It was\n",
    -       "           > challenging but incredibly rewarding to see it come to life. How about you? Do you have\n",
    -       "           > a favorite project or achievement in your field?\n",
    +       "           > Nice to meet you, Lisa! Your work sounds really interesting, especially with AI and\n",
    +       "           > machine learning. As for traveling, I recently visited Japan, which was an amazing\n",
    +       "           > experience. The blend of traditional and modern architecture there is truly inspiring.\n",
    +       "           > Plus, the food was incredible! How about you? Have you been on any exciting trips\n",
    +       "           > lately?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;3;4;38;5;51mOscar\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mLisa\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", - "\u001b[1;3;38;5;51m > Nice to meet you, Lisa! Your work sounds really interesting, especially with the focus\u001b[0m\n", - "\u001b[1;3;38;5;51m > on machine learning. As for my favorite project, I once worked on a sustainable housing\u001b[0m\n", - "\u001b[1;3;38;5;51m > development that incorporated a lot of green technologies and materials. It was\u001b[0m\n", - "\u001b[1;3;38;5;51m > challenging but incredibly rewarding to see it come to life. How about you? Do you have\u001b[0m\n", - "\u001b[1;3;38;5;51m > a favorite project or achievement in your field?\u001b[0m\n" + "\u001b[1;3;38;5;51m > Nice to meet you, Lisa! Your work sounds really interesting, especially with AI and\u001b[0m\n", + "\u001b[1;3;38;5;51m > machine learning. As for traveling, I recently visited Japan, which was an amazing\u001b[0m\n", + "\u001b[1;3;38;5;51m > experience. The blend of traditional and modern architecture there is truly inspiring.\u001b[0m\n", + "\u001b[1;3;38;5;51m > Plus, the food was incredible! How about you? 
Have you been on any exciting trips\u001b[0m\n", + "\u001b[1;3;38;5;51m > lately?\u001b[0m\n" ] }, "metadata": {}, @@ -544,7 +651,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:43,760 - tinytroupe - INFO - [Chat Room] Running world simulation step 3 of 4.\n" + "2024-11-10 23:57:15,516 - tinytroupe - INFO - [Chat Room] Running world simulation step 3 of 4.\n" ] }, { @@ -564,9 +671,48 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:43,762 - tinytroupe - INFO - [Chat Room] No timedelta provided, so the datetime was not advanced.\n" + "2024-11-10 23:57:15,519 - tinytroupe - INFO - [Chat Room] No timedelta provided, so the datetime was not advanced.\n" + ] + }, + { + "data": { + "text/html": [ + "
    Lisa --> Lisa: [THOUGHT] \n",
    +       "          > I will now act a bit, and then issue DONE.\n",
    +       "
    \n" + ], + "text/plain": [ + "\u001b[2;3;4;38;5;51mLisa\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLisa\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "2024-11-10 23:57:15,840 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, + { + "data": { + "text/html": [ + "
    Lisa acts: [THINK] \n",
    +       "          > Oscar's trip to Japan sounds amazing. I should share my recent travel experiences with\n",
    +       "          > him.\n",
    +       "
    \n" + ], + "text/plain": [ + "\u001b[4;32mLisa\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m \u001b[0m\n", + "\u001b[32m > Oscar's trip to Japan sounds amazing. I should share my recent travel experiences with\u001b[0m\n", + "\u001b[32m > him.\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, { "data": { "text/html": [ @@ -586,29 +732,25 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:44,077 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:57:19,118 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { "data": { "text/html": [ "
    Lisa acts: [TALK] \n",
    -       "          > That sounds like an amazing project, Oscar! It's inspiring to hear about sustainable\n",
    -       "          > developments. In my field, one of my favorite achievements was developing a machine\n",
    -       "          > learning model that significantly improved the relevance of search results for our\n",
    -       "          > users. It was a complex task, but seeing the positive impact it had was very fulfilling.\n",
    -       "          > I also enjoy collaborating with my team to tackle challenging data problems. It's always\n",
    -       "          > exciting to see how technology can enhance user experiences.\n",
    +       "          > Japan sounds incredible! I love how it combines the old and the new. I recently traveled\n",
    +       "          > to Italy, which was a dream come true. The history and art in places like Rome and\n",
    +       "          > Florence were breathtaking, and I couldn't get enough of the delicious pasta and gelato.\n",
    +       "          > Traveling really opens up new perspectives, doesn't it?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;4;38;5;40mLisa\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", - "\u001b[1;38;5;40m > That sounds like an amazing project, Oscar! It's inspiring to hear about sustainable\u001b[0m\n", - "\u001b[1;38;5;40m > developments. In my field, one of my favorite achievements was developing a machine\u001b[0m\n", - "\u001b[1;38;5;40m > learning model that significantly improved the relevance of search results for our\u001b[0m\n", - "\u001b[1;38;5;40m > users. It was a complex task, but seeing the positive impact it had was very fulfilling.\u001b[0m\n", - "\u001b[1;38;5;40m > I also enjoy collaborating with my team to tackle challenging data problems. It's always\u001b[0m\n", - "\u001b[1;38;5;40m > exciting to see how technology can enhance user experiences.\u001b[0m\n" + "\u001b[1;38;5;40m > Japan sounds incredible! I love how it combines the old and the new. I recently traveled\u001b[0m\n", + "\u001b[1;38;5;40m > to Italy, which was a dream come true. The history and art in places like Rome and\u001b[0m\n", + "\u001b[1;38;5;40m > Florence were breathtaking, and I couldn't get enough of the delicious pasta and gelato.\u001b[0m\n", + "\u001b[1;38;5;40m > Traveling really opens up new perspectives, doesn't it?\u001b[0m\n" ] }, "metadata": {}, @@ -633,7 +775,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:49,417 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:57:23,197 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { @@ -655,22 +797,18 @@ "data": { "text/html": [ "
    Lisa --> Oscar: [CONVERSATION] \n",
    -       "          > That sounds like an amazing project, Oscar! It's inspiring to hear about sustainable\n",
    -       "          > developments. In my field, one of my favorite achievements was developing a machine\n",
    -       "          > learning model that significantly improved the relevance of search results for our\n",
    -       "          > users. It was a complex task, but seeing the positive impact it had was very fulfilling.\n",
    -       "          > I also enjoy collaborating with my team to tackle challenging data problems. It's always\n",
    -       "          > exciting to see how technology can enhance user experiences.\n",
    +       "          > Japan sounds incredible! I love how it combines the old and the new. I recently traveled\n",
    +       "          > to Italy, which was a dream come true. The history and art in places like Rome and\n",
    +       "          > Florence were breathtaking, and I couldn't get enough of the delicious pasta and gelato.\n",
    +       "          > Traveling really opens up new perspectives, doesn't it?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;3;4;38;5;51mLisa\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mOscar\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", - "\u001b[1;3;38;5;51m > That sounds like an amazing project, Oscar! It's inspiring to hear about sustainable\u001b[0m\n", - "\u001b[1;3;38;5;51m > developments. In my field, one of my favorite achievements was developing a machine\u001b[0m\n", - "\u001b[1;3;38;5;51m > learning model that significantly improved the relevance of search results for our\u001b[0m\n", - "\u001b[1;3;38;5;51m > users. It was a complex task, but seeing the positive impact it had was very fulfilling.\u001b[0m\n", - "\u001b[1;3;38;5;51m > I also enjoy collaborating with my team to tackle challenging data problems. It's always\u001b[0m\n", - "\u001b[1;3;38;5;51m > exciting to see how technology can enhance user experiences.\u001b[0m\n" + "\u001b[1;3;38;5;51m > Japan sounds incredible! I love how it combines the old and the new. I recently traveled\u001b[0m\n", + "\u001b[1;3;38;5;51m > to Italy, which was a dream come true. The history and art in places like Rome and\u001b[0m\n", + "\u001b[1;3;38;5;51m > Florence were breathtaking, and I couldn't get enough of the delicious pasta and gelato.\u001b[0m\n", + "\u001b[1;3;38;5;51m > Traveling really opens up new perspectives, doesn't it?\u001b[0m\n" ] }, "metadata": {}, @@ -695,25 +833,27 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:53,749 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:57:26,318 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { "data": { "text/html": [ "
    Oscar acts: [TALK] \n",
    -       "           > That's impressive, Lisa! It's amazing how technology can make such a difference in\n",
    -       "           > people's lives. Collaborating with a team to solve complex problems sounds like a\n",
    -       "           > rewarding experience. It's great to hear about your passion for improving user\n",
    -       "           > experiences. Do you have any exciting projects you're currently working on?\n",
    +       "           > Italy is such a beautiful country! The art and history there are truly remarkable, and\n",
    +       "           > I completely agree about the food—it's hard to beat authentic Italian cuisine.\n",
    +       "           > Traveling definitely broadens our horizons and gives us a deeper appreciation for\n",
    +       "           > different cultures and ways of life. Do you have any other destinations on your travel\n",
    +       "           > bucket list?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;4;38;5;40mOscar\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", - "\u001b[1;38;5;40m > That's impressive, Lisa! It's amazing how technology can make such a difference in\u001b[0m\n", - "\u001b[1;38;5;40m > people's lives. Collaborating with a team to solve complex problems sounds like a\u001b[0m\n", - "\u001b[1;38;5;40m > rewarding experience. It's great to hear about your passion for improving user\u001b[0m\n", - "\u001b[1;38;5;40m > experiences. Do you have any exciting projects you're currently working on?\u001b[0m\n" + "\u001b[1;38;5;40m > Italy is such a beautiful country! The art and history there are truly remarkable, and\u001b[0m\n", + "\u001b[1;38;5;40m > I completely agree about the food—it's hard to beat authentic Italian cuisine.\u001b[0m\n", + "\u001b[1;38;5;40m > Traveling definitely broadens our horizons and gives us a deeper appreciation for\u001b[0m\n", + "\u001b[1;38;5;40m > different cultures and ways of life. Do you have any other destinations on your travel\u001b[0m\n", + "\u001b[1;38;5;40m > bucket list?\u001b[0m\n" ] }, "metadata": {}, @@ -738,7 +878,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:08:59,675 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:57:31,093 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { @@ -760,18 +900,20 @@ "data": { "text/html": [ "
    Oscar --> Lisa: [CONVERSATION] \n",
    -       "           > That's impressive, Lisa! It's amazing how technology can make such a difference in\n",
    -       "           > people's lives. Collaborating with a team to solve complex problems sounds like a\n",
    -       "           > rewarding experience. It's great to hear about your passion for improving user\n",
    -       "           > experiences. Do you have any exciting projects you're currently working on?\n",
    +       "           > Italy is such a beautiful country! The art and history there are truly remarkable, and\n",
    +       "           > I completely agree about the food—it's hard to beat authentic Italian cuisine.\n",
    +       "           > Traveling definitely broadens our horizons and gives us a deeper appreciation for\n",
    +       "           > different cultures and ways of life. Do you have any other destinations on your travel\n",
    +       "           > bucket list?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;3;4;38;5;51mOscar\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mLisa\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", - "\u001b[1;3;38;5;51m > That's impressive, Lisa! It's amazing how technology can make such a difference in\u001b[0m\n", - "\u001b[1;3;38;5;51m > people's lives. Collaborating with a team to solve complex problems sounds like a\u001b[0m\n", - "\u001b[1;3;38;5;51m > rewarding experience. It's great to hear about your passion for improving user\u001b[0m\n", - "\u001b[1;3;38;5;51m > experiences. Do you have any exciting projects you're currently working on?\u001b[0m\n" + "\u001b[1;3;38;5;51m > Italy is such a beautiful country! The art and history there are truly remarkable, and\u001b[0m\n", + "\u001b[1;3;38;5;51m > I completely agree about the food—it's hard to beat authentic Italian cuisine.\u001b[0m\n", + "\u001b[1;3;38;5;51m > Traveling definitely broadens our horizons and gives us a deeper appreciation for\u001b[0m\n", + "\u001b[1;3;38;5;51m > different cultures and ways of life. 
Do you have any other destinations on your travel\u001b[0m\n", + "\u001b[1;3;38;5;51m > bucket list?\u001b[0m\n" ] }, "metadata": {}, @@ -781,7 +923,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:09:04,530 - tinytroupe - INFO - [Chat Room] Running world simulation step 4 of 4.\n" + "2024-11-10 23:57:33,639 - tinytroupe - INFO - [Chat Room] Running world simulation step 4 of 4.\n" ] }, { @@ -801,7 +943,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:09:04,533 - tinytroupe - INFO - [Chat Room] No timedelta provided, so the datetime was not advanced.\n" + "2024-11-10 23:57:33,642 - tinytroupe - INFO - [Chat Room] No timedelta provided, so the datetime was not advanced.\n" ] }, { @@ -823,29 +965,66 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:09:04,826 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:57:34,009 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" + ] + }, + { + "data": { + "text/html": [ + "
    Lisa acts: [THINK] \n",
    +       "          > Oscar is right about traveling broadening our horizons. I should share some destinations\n",
    +       "          > on my travel bucket list with him.\n",
    +       "
    \n" + ], + "text/plain": [ + "\u001b[4;32mLisa\u001b[0m\u001b[32m acts: \u001b[0m\u001b[1;32m[\u001b[0m\u001b[32mTHINK\u001b[0m\u001b[1;32m]\u001b[0m\u001b[32m \u001b[0m\n", + "\u001b[32m > Oscar is right about traveling broadening our horizons. I should share some destinations\u001b[0m\n", + "\u001b[32m > on my travel bucket list with him.\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "
    Lisa --> Lisa: [THOUGHT] \n",
    +       "          > I will now act a bit, and then issue DONE.\n",
    +       "
    \n" + ], + "text/plain": [ + "\u001b[2;3;4;38;5;51mLisa\u001b[0m\u001b[2;3;38;5;51m --> \u001b[0m\u001b[2;3;4;38;5;51mLisa\u001b[0m\u001b[2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[2;3;38;5;51m \u001b[0m\n", + "\u001b[2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "2024-11-10 23:57:38,542 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { "data": { "text/html": [ "
    Lisa acts: [TALK] \n",
    -       "          > Thank you, Oscar! I'm currently working on a project that involves enhancing natural\n",
    -       "          > language understanding in search queries. It's a challenging but exciting area, as it\n",
    -       "          > involves making search engines more intuitive and user-friendly. We're also exploring\n",
    -       "          > ways to incorporate more user feedback into our models to continuously improve the\n",
    -       "          > search experience. It's a dynamic field, and I'm always eager to see how our work can\n",
    -       "          > make a positive impact.\n",
    +       "          > I have a few places on my travel bucket list! I'd love to visit New Zealand for its\n",
    +       "          > stunning landscapes and outdoor adventures. Greece is also high on my list for its rich\n",
    +       "          > history and beautiful islands. And, of course, I'd love to explore more of Asia,\n",
    +       "          > especially South Korea and Thailand. How about you? Any dream destinations you haven't\n",
    +       "          > visited yet?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;4;38;5;40mLisa\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", - "\u001b[1;38;5;40m > Thank you, Oscar! I'm currently working on a project that involves enhancing natural\u001b[0m\n", - "\u001b[1;38;5;40m > language understanding in search queries. It's a challenging but exciting area, as it\u001b[0m\n", - "\u001b[1;38;5;40m > involves making search engines more intuitive and user-friendly. We're also exploring\u001b[0m\n", - "\u001b[1;38;5;40m > ways to incorporate more user feedback into our models to continuously improve the\u001b[0m\n", - "\u001b[1;38;5;40m > search experience. It's a dynamic field, and I'm always eager to see how our work can\u001b[0m\n", - "\u001b[1;38;5;40m > make a positive impact.\u001b[0m\n" + "\u001b[1;38;5;40m > I have a few places on my travel bucket list! I'd love to visit New Zealand for its\u001b[0m\n", + "\u001b[1;38;5;40m > stunning landscapes and outdoor adventures. Greece is also high on my list for its rich\u001b[0m\n", + "\u001b[1;38;5;40m > history and beautiful islands. And, of course, I'd love to explore more of Asia,\u001b[0m\n", + "\u001b[1;38;5;40m > especially South Korea and Thailand. How about you? Any dream destinations you haven't\u001b[0m\n", + "\u001b[1;38;5;40m > visited yet?\u001b[0m\n" ] }, "metadata": {}, @@ -870,7 +1049,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:09:10,495 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:57:44,608 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { @@ -892,22 +1071,20 @@ "data": { "text/html": [ "
    Lisa --> Oscar: [CONVERSATION] \n",
    -       "          > Thank you, Oscar! I'm currently working on a project that involves enhancing natural\n",
    -       "          > language understanding in search queries. It's a challenging but exciting area, as it\n",
    -       "          > involves making search engines more intuitive and user-friendly. We're also exploring\n",
    -       "          > ways to incorporate more user feedback into our models to continuously improve the\n",
    -       "          > search experience. It's a dynamic field, and I'm always eager to see how our work can\n",
    -       "          > make a positive impact.\n",
    +       "          > I have a few places on my travel bucket list! I'd love to visit New Zealand for its\n",
    +       "          > stunning landscapes and outdoor adventures. Greece is also high on my list for its rich\n",
    +       "          > history and beautiful islands. And, of course, I'd love to explore more of Asia,\n",
    +       "          > especially South Korea and Thailand. How about you? Any dream destinations you haven't\n",
    +       "          > visited yet?\n",
            "
    \n" ], "text/plain": [ "\u001b[1;3;4;38;5;51mLisa\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mOscar\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", - "\u001b[1;3;38;5;51m > Thank you, Oscar! I'm currently working on a project that involves enhancing natural\u001b[0m\n", - "\u001b[1;3;38;5;51m > language understanding in search queries. It's a challenging but exciting area, as it\u001b[0m\n", - "\u001b[1;3;38;5;51m > involves making search engines more intuitive and user-friendly. We're also exploring\u001b[0m\n", - "\u001b[1;3;38;5;51m > ways to incorporate more user feedback into our models to continuously improve the\u001b[0m\n", - "\u001b[1;3;38;5;51m > search experience. It's a dynamic field, and I'm always eager to see how our work can\u001b[0m\n", - "\u001b[1;3;38;5;51m > make a positive impact.\u001b[0m\n" + "\u001b[1;3;38;5;51m > I have a few places on my travel bucket list! I'd love to visit New Zealand for its\u001b[0m\n", + "\u001b[1;3;38;5;51m > stunning landscapes and outdoor adventures. Greece is also high on my list for its rich\u001b[0m\n", + "\u001b[1;3;38;5;51m > history and beautiful islands. And, of course, I'd love to explore more of Asia,\u001b[0m\n", + "\u001b[1;3;38;5;51m > especially South Korea and Thailand. How about you? Any dream destinations you haven't\u001b[0m\n", + "\u001b[1;3;38;5;51m > visited yet?\u001b[0m\n" ] }, "metadata": {}, @@ -932,27 +1109,27 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:09:16,240 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:57:47,900 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { "data": { "text/html": [ "
    Oscar acts: [TALK] \n",
    -       "           > That sounds like a fascinating project, Lisa! Enhancing natural language understanding\n",
    -       "           > is such an important area, and it's great to hear that you're working on making search\n",
    -       "           > engines more intuitive. Incorporating user feedback sounds like a smart approach to\n",
    -       "           > keep improving the experience. It's exciting to be part of a field that's constantly\n",
    -       "           > evolving and making a real difference. Keep up the great work!\n",
    +       "           > Those sound like amazing destinations! New Zealand's landscapes are indeed\n",
    +       "           > breathtaking, and Greece's history is fascinating. For me, I'd love to visit Iceland\n",
    +       "           > for its unique natural wonders like the Northern Lights and geysers. South America is\n",
    +       "           > also on my list, particularly Peru for Machu Picchu and Brazil for its vibrant culture.\n",
    +       "           > Traveling is such a wonderful way to experience the world!\n",
            "
    \n" ], "text/plain": [ "\u001b[1;4;38;5;40mOscar\u001b[0m\u001b[1;38;5;40m acts: \u001b[0m\u001b[1;38;5;40m[\u001b[0m\u001b[1;38;5;40mTALK\u001b[0m\u001b[1;38;5;40m]\u001b[0m\u001b[1;38;5;40m \u001b[0m\n", - "\u001b[1;38;5;40m > That sounds like a fascinating project, Lisa! Enhancing natural language understanding\u001b[0m\n", - "\u001b[1;38;5;40m > is such an important area, and it's great to hear that you're working on making search\u001b[0m\n", - "\u001b[1;38;5;40m > engines more intuitive. Incorporating user feedback sounds like a smart approach to\u001b[0m\n", - "\u001b[1;38;5;40m > keep improving the experience. It's exciting to be part of a field that's constantly\u001b[0m\n", - "\u001b[1;38;5;40m > evolving and making a real difference. Keep up the great work!\u001b[0m\n" + "\u001b[1;38;5;40m > Those sound like amazing destinations! New Zealand's landscapes are indeed\u001b[0m\n", + "\u001b[1;38;5;40m > breathtaking, and Greece's history is fascinating. For me, I'd love to visit Iceland\u001b[0m\n", + "\u001b[1;38;5;40m > for its unique natural wonders like the Northern Lights and geysers. South America is\u001b[0m\n", + "\u001b[1;38;5;40m > also on my list, particularly Peru for Machu Picchu and Brazil for its vibrant culture.\u001b[0m\n", + "\u001b[1;38;5;40m > Traveling is such a wonderful way to experience the world!\u001b[0m\n" ] }, "metadata": {}, @@ -977,7 +1154,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "2024-11-04 23:09:24,543 - tinytroupe - INFO - Waiting 2.0 seconds before next API request (to avoid throttling)...\n" + "2024-11-10 23:57:52,174 - tinytroupe - INFO - Waiting 1.0 seconds before next API request (to avoid throttling)...\n" ] }, { @@ -999,20 +1176,20 @@ "data": { "text/html": [ "
    Oscar --> Lisa: [CONVERSATION] \n",
    -       "           > That sounds like a fascinating project, Lisa! Enhancing natural language understanding\n",
    -       "           > is such an important area, and it's great to hear that you're working on making search\n",
    -       "           > engines more intuitive. Incorporating user feedback sounds like a smart approach to\n",
    -       "           > keep improving the experience. It's exciting to be part of a field that's constantly\n",
    -       "           > evolving and making a real difference. Keep up the great work!\n",
    +       "           > Those sound like amazing destinations! New Zealand's landscapes are indeed\n",
    +       "           > breathtaking, and Greece's history is fascinating. For me, I'd love to visit Iceland\n",
    +       "           > for its unique natural wonders like the Northern Lights and geysers. South America is\n",
    +       "           > also on my list, particularly Peru for Machu Picchu and Brazil for its vibrant culture.\n",
    +       "           > Traveling is such a wonderful way to experience the world!\n",
            "
    \n" ], "text/plain": [ "\u001b[1;3;4;38;5;51mOscar\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mLisa\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", - "\u001b[1;3;38;5;51m > That sounds like a fascinating project, Lisa! Enhancing natural language understanding\u001b[0m\n", - "\u001b[1;3;38;5;51m > is such an important area, and it's great to hear that you're working on making search\u001b[0m\n", - "\u001b[1;3;38;5;51m > engines more intuitive. Incorporating user feedback sounds like a smart approach to\u001b[0m\n", - "\u001b[1;3;38;5;51m > keep improving the experience. It's exciting to be part of a field that's constantly\u001b[0m\n", - "\u001b[1;3;38;5;51m > evolving and making a real difference. Keep up the great work!\u001b[0m\n" + "\u001b[1;3;38;5;51m > Those sound like amazing destinations! New Zealand's landscapes are indeed\u001b[0m\n", + "\u001b[1;3;38;5;51m > breathtaking, and Greece's history is fascinating. For me, I'd love to visit Iceland\u001b[0m\n", + "\u001b[1;3;38;5;51m > for its unique natural wonders like the Northern Lights and geysers. South America is\u001b[0m\n", + "\u001b[1;3;38;5;51m > also on my list, particularly Peru for Machu Picchu and Brazil for its vibrant culture.\u001b[0m\n", + "\u001b[1;3;38;5;51m > Traveling is such a wonderful way to experience the world!\u001b[0m\n" ] }, "metadata": {}, @@ -1026,199 +1203,251 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ - "
    >>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "
    >>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "USER --> Lisa: [CONVERSATION] \n",
            "          > Talk to Oscar to know more about him\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
    +       "Lisa --> Lisa: [THOUGHT] \n",
    +       "          > I will now act a bit, and then issue DONE.\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
    +       "Lisa acts: [THINK] \n",
    +       "          > I should talk to Oscar to learn more about him.\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Lisa: [THOUGHT] \n",
            "          > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa acts: [TALK] \n",
    -       "          > Hi Oscar, I'd love to know more about you. Could you tell me a bit about yourself?\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "          > Hi Oscar, I was hoping to learn more about you. Could you tell me a bit about yourself?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Lisa: [THOUGHT] \n",
            "          > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa acts: [DONE] \n",
            "\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Lisa: [CONVERSATION] \n",
            "           > Hi Lisa! Sure, I'd be happy to share a bit about myself. I'm Oscar, a 30-year-old\n",
    -       "           > architect from Germany. I work at a company called Awesome Inc., where I focus on\n",
    -       "           > designing standard elements for new apartment buildings. I love modernist architecture,\n",
    -       "           > new technologies, and sustainable practices. In my free time, I enjoy traveling to\n",
    -       "           > exotic places, playing the guitar, and reading science fiction books. How about you?\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "           > architect from Germany. I work at Awesome Inc., where I focus on designing standard\n",
    +       "           > elements for new apartment buildings. I love modernist architecture, new technologies,\n",
    +       "           > and sustainable practices. In my free time, I enjoy traveling to exotic places, playing\n",
    +       "           > the guitar, and reading science fiction. How about you?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
    +       "Lisa --> Lisa: [THOUGHT] \n",
    +       "          > I will now act a bit, and then issue DONE.\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
    +       "Lisa acts: [THINK] \n",
    +       "          > Oscar seems to have a fascinating background and interests. I should share a bit about\n",
    +       "          > myself in return.\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Lisa: [THOUGHT] \n",
            "          > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa acts: [TALK] \n",
    -       "          > Thanks for sharing, Oscar! It's great to meet someone with such diverse interests. I'm\n",
    -       "          > Lisa, a data scientist from Canada. I work at Microsoft, focusing on improving search\n",
    -       "          > results using machine learning. In my free time, I enjoy cooking, playing the piano, and\n",
    -       "          > watching movies. It's fascinating to hear about your work in architecture and your\n",
    -       "          > passion for sustainable practices. Do you have a favorite project you've worked on?\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "          > It's great to meet you, Oscar! I'm Lisa, a 28-year-old data scientist from Canada. I\n",
    +       "          > work at Microsoft, focusing on improving search results through data analysis and\n",
    +       "          > machine learning. I love learning new things, especially in AI and natural language\n",
    +       "          > processing. In my free time, I enjoy cooking, playing the piano, and watching movies,\n",
    +       "          > particularly comedies and thrillers. It's fascinating that you enjoy modernist\n",
    +       "          > architecture and sustainable practices. Have you traveled anywhere interesting recently?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Lisa: [THOUGHT] \n",
            "          > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa acts: [DONE] \n",
            "\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Lisa: [CONVERSATION] \n",
    -       "           > Nice to meet you, Lisa! Your work sounds really interesting, especially with the focus\n",
    -       "           > on machine learning. As for my favorite project, I once worked on a sustainable housing\n",
    -       "           > development that incorporated a lot of green technologies and materials. It was\n",
    -       "           > challenging but incredibly rewarding to see it come to life. How about you? Do you have\n",
    -       "           > a favorite project or achievement in your field?\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "           > Nice to meet you, Lisa! Your work sounds really interesting, especially with AI and\n",
    +       "           > machine learning. As for traveling, I recently visited Japan, which was an amazing\n",
    +       "           > experience. The blend of traditional and modern architecture there is truly inspiring.\n",
    +       "           > Plus, the food was incredible! How about you? Have you been on any exciting trips\n",
    +       "           > lately?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Lisa: [THOUGHT] \n",
            "          > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
    +       "Lisa acts: [THINK] \n",
    +       "          > Oscar's trip to Japan sounds amazing. I should share my recent travel experiences with\n",
    +       "          > him.\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
    +       "Lisa --> Lisa: [THOUGHT] \n",
    +       "          > I will now act a bit, and then issue DONE.\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa acts: [TALK] \n",
    -       "          > That sounds like an amazing project, Oscar! It's inspiring to hear about sustainable\n",
    -       "          > developments. In my field, one of my favorite achievements was developing a machine\n",
    -       "          > learning model that significantly improved the relevance of search results for our\n",
    -       "          > users. It was a complex task, but seeing the positive impact it had was very fulfilling.\n",
    -       "          > I also enjoy collaborating with my team to tackle challenging data problems. It's always\n",
    -       "          > exciting to see how technology can enhance user experiences.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "          > Japan sounds incredible! I love how it combines the old and the new. I recently traveled\n",
    +       "          > to Italy, which was a dream come true. The history and art in places like Rome and\n",
    +       "          > Florence were breathtaking, and I couldn't get enough of the delicious pasta and gelato.\n",
    +       "          > Traveling really opens up new perspectives, doesn't it?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Lisa: [THOUGHT] \n",
            "          > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa acts: [DONE] \n",
            "\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Lisa: [CONVERSATION] \n",
    -       "           > That's impressive, Lisa! It's amazing how technology can make such a difference in\n",
    -       "           > people's lives. Collaborating with a team to solve complex problems sounds like a\n",
    -       "           > rewarding experience. It's great to hear about your passion for improving user\n",
    -       "           > experiences. Do you have any exciting projects you're currently working on?\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "           > Italy is such a beautiful country! The art and history there are truly remarkable, and\n",
    +       "           > I completely agree about the food—it's hard to beat authentic Italian cuisine.\n",
    +       "           > Traveling definitely broadens our horizons and gives us a deeper appreciation for\n",
    +       "           > different cultures and ways of life. Do you have any other destinations on your travel\n",
    +       "           > bucket list?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
    +       "Lisa --> Lisa: [THOUGHT] \n",
    +       "          > I will now act a bit, and then issue DONE.\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
    +       "Lisa acts: [THINK] \n",
    +       "          > Oscar is right about traveling broadening our horizons. I should share some destinations\n",
    +       "          > on my travel bucket list with him.\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Lisa: [THOUGHT] \n",
            "          > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa acts: [TALK] \n",
    -       "          > Thank you, Oscar! I'm currently working on a project that involves enhancing natural\n",
    -       "          > language understanding in search queries. It's a challenging but exciting area, as it\n",
    -       "          > involves making search engines more intuitive and user-friendly. We're also exploring\n",
    -       "          > ways to incorporate more user feedback into our models to continuously improve the\n",
    -       "          > search experience. It's a dynamic field, and I'm always eager to see how our work can\n",
    -       "          > make a positive impact.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "          > I have a few places on my travel bucket list! I'd love to visit New Zealand for its\n",
    +       "          > stunning landscapes and outdoor adventures. Greece is also high on my list for its rich\n",
    +       "          > history and beautiful islands. And, of course, I'd love to explore more of Asia,\n",
    +       "          > especially South Korea and Thailand. How about you? Any dream destinations you haven't\n",
    +       "          > visited yet?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Lisa: [THOUGHT] \n",
            "          > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa acts: [DONE] \n",
            "\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Lisa: [CONVERSATION] \n",
    -       "           > That sounds like a fascinating project, Lisa! Enhancing natural language understanding\n",
    -       "           > is such an important area, and it's great to hear that you're working on making search\n",
    -       "           > engines more intuitive. Incorporating user feedback sounds like a smart approach to\n",
    -       "           > keep improving the experience. It's exciting to be part of a field that's constantly\n",
    -       "           > evolving and making a real difference. Keep up the great work!\n",
    +       "           > Those sound like amazing destinations! New Zealand's landscapes are indeed\n",
    +       "           > breathtaking, and Greece's history is fascinating. For me, I'd love to visit Iceland\n",
    +       "           > for its unique natural wonders like the Northern Lights and geysers. South America is\n",
    +       "           > also on my list, particularly Peru for Machu Picchu and Brazil for its vibrant culture.\n",
    +       "           > Traveling is such a wonderful way to experience the world!\n",
            "
    \n" ], "text/plain": [ - ">>>>>>>>> Date and time of events: \u001b[1;36m2024\u001b[0m-\u001b[1;36m11\u001b[0m-04T\u001b[1;92m23:08:03\u001b[0m.\u001b[1;36m173089\u001b[0m\n", + ">>>>>>>>> Date and time of events: \u001b[1;36m2024\u001b[0m-\u001b[1;36m11\u001b[0m-10T\u001b[1;92m23:56:29\u001b[0m.\u001b[1;36m864490\u001b[0m\n", "\u001b[1;3;4;38;5;51mUSER\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mLisa\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", "\u001b[1;3;38;5;51m > Talk to Oscar to know more about him\u001b[0m\n", - "\u001b[1;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;3;38;5;51m2024\u001b[0m\u001b[1;3;38;5;51m-\u001b[0m\u001b[1;3;38;5;51m11\u001b[0m\u001b[1;3;38;5;51m-04T\u001b[0m\u001b[1;3;38;5;51m23:08:03\u001b[0m\u001b[1;3;38;5;51m.\u001b[0m\u001b[1;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;3;38;5;51m2024\u001b[0m\u001b[1;3;38;5;51m-\u001b[0m\u001b[1;3;38;5;51m11\u001b[0m\u001b[1;3;38;5;51m-10T\u001b[0m\u001b[1;3;38;5;51m23:56:29\u001b[0m\u001b[1;3;38;5;51m.\u001b[0m\u001b[1;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: 
\u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", + "\u001b[1;2;3;4;32mLisa\u001b[0m\u001b[1;2;3;32m acts: \u001b[0m\u001b[1;2;3;32m[\u001b[0m\u001b[1;2;3;32mTHINK\u001b[0m\u001b[1;2;3;32m]\u001b[0m\u001b[1;2;3;32m \u001b[0m\n", + "\u001b[1;2;3;32m > I should talk to Oscar to learn more about him.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", + "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", + "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;40mLisa\u001b[0m\u001b[1;2;3;38;5;40m acts: \u001b[0m\u001b[1;2;3;38;5;40m[\u001b[0m\u001b[1;2;3;38;5;40mTALK\u001b[0m\u001b[1;2;3;38;5;40m]\u001b[0m\u001b[1;2;3;38;5;40m \u001b[0m\n", - "\u001b[1;2;3;38;5;40m > Hi Oscar, I'd love to know more about you. 
Could you tell me a bit about yourself?\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > Hi Oscar, I was hoping to learn more about you. Could you tell me a bit about yourself?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;252mLisa\u001b[0m\u001b[1;2;3;38;5;252m acts: \u001b[0m\u001b[1;2;3;38;5;252m[\u001b[0m\u001b[1;2;3;38;5;252mDONE\u001b[0m\u001b[1;2;3;38;5;252m]\u001b[0m\u001b[1;2;3;38;5;252m \u001b[0m\n", "\n", - 
"\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > Hi Lisa! Sure, I'd be happy to share a bit about myself. I'm Oscar, a \u001b[0m\u001b[1;2;3;38;5;51m30\u001b[0m\u001b[1;2;3;38;5;51m-year-old\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > architect from Germany. I work at a company called Awesome Inc., where I focus on\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > designing standard elements for new apartment buildings. I love modernist architecture,\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > new technologies, and sustainable practices. In my free time, I enjoy traveling to\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > exotic places, playing the guitar, and reading science fiction books. How about you?\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > architect from Germany. 
I work at Awesome Inc., where I focus on designing standard\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > elements for new apartment buildings. I love modernist architecture, new technologies,\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > and sustainable practices. In my free time, I enjoy traveling to exotic places, playing\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > the guitar, and reading science fiction. How about you?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", + "\u001b[1;2;3;4;32mLisa\u001b[0m\u001b[1;2;3;32m acts: \u001b[0m\u001b[1;2;3;32m[\u001b[0m\u001b[1;2;3;32mTHINK\u001b[0m\u001b[1;2;3;32m]\u001b[0m\u001b[1;2;3;32m \u001b[0m\n", + "\u001b[1;2;3;32m > Oscar seems to have a fascinating background and interests. 
I should share a bit about\u001b[0m\n", + "\u001b[1;2;3;32m > myself in return.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", + "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", + "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;40mLisa\u001b[0m\u001b[1;2;3;38;5;40m acts: \u001b[0m\u001b[1;2;3;38;5;40m[\u001b[0m\u001b[1;2;3;38;5;40mTALK\u001b[0m\u001b[1;2;3;38;5;40m]\u001b[0m\u001b[1;2;3;38;5;40m \u001b[0m\n", - "\u001b[1;2;3;38;5;40m > Thanks for sharing, Oscar! It's great to meet someone with such diverse interests. I'm\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > Lisa, a data scientist from Canada. I work at Microsoft, focusing on improving search\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > results using machine learning. In my free time, I enjoy cooking, playing the piano, and\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > watching movies. It's fascinating to hear about your work in architecture and your\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > passion for sustainable practices. 
Do you have a favorite project you've worked on?\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > It's great to meet you, Oscar! I'm Lisa, a \u001b[0m\u001b[1;2;3;38;5;40m28\u001b[0m\u001b[1;2;3;38;5;40m-year-old data scientist from Canada. I\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > work at Microsoft, focusing on improving search results through data analysis and\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > machine learning. I love learning new things, especially in AI and natural language\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > processing. In my free time, I enjoy cooking, playing the piano, and watching movies,\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > particularly comedies and thrillers. It's fascinating that you enjoy modernist\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > architecture and sustainable practices. 
Have you traveled anywhere interesting recently?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;252mLisa\u001b[0m\u001b[1;2;3;38;5;252m acts: \u001b[0m\u001b[1;2;3;38;5;252m[\u001b[0m\u001b[1;2;3;38;5;252mDONE\u001b[0m\u001b[1;2;3;38;5;252m]\u001b[0m\u001b[1;2;3;38;5;252m \u001b[0m\n", "\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: 
\u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", - "\u001b[1;2;3;38;5;51m > Nice to meet you, Lisa! Your work sounds really interesting, especially with the focus\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > on machine learning. As for my favorite project, I once worked on a sustainable housing\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > development that incorporated a lot of green technologies and materials. It was\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > challenging but incredibly rewarding to see it come to life. How about you? Do you have\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > a favorite project or achievement in your field?\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > Nice to meet you, Lisa! Your work sounds really interesting, especially with AI and\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > machine learning. As for traveling, I recently visited Japan, which was an amazing\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > experience. The blend of traditional and modern architecture there is truly inspiring.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > Plus, the food was incredible! How about you? 
Have you been on any exciting trips\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > lately?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", + "\u001b[1;2;3;4;32mLisa\u001b[0m\u001b[1;2;3;32m acts: \u001b[0m\u001b[1;2;3;32m[\u001b[0m\u001b[1;2;3;32mTHINK\u001b[0m\u001b[1;2;3;32m]\u001b[0m\u001b[1;2;3;32m \u001b[0m\n", + "\u001b[1;2;3;32m > Oscar's trip to Japan sounds amazing. 
I should share my recent travel experiences with\u001b[0m\n", + "\u001b[1;2;3;32m > him.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", + "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", + "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;40mLisa\u001b[0m\u001b[1;2;3;38;5;40m acts: \u001b[0m\u001b[1;2;3;38;5;40m[\u001b[0m\u001b[1;2;3;38;5;40mTALK\u001b[0m\u001b[1;2;3;38;5;40m]\u001b[0m\u001b[1;2;3;38;5;40m \u001b[0m\n", - "\u001b[1;2;3;38;5;40m > That sounds like an amazing project, Oscar! It's inspiring to hear about sustainable\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > developments. In my field, one of my favorite achievements was developing a machine\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > learning model that significantly improved the relevance of search results for our\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > users. It was a complex task, but seeing the positive impact it had was very fulfilling.\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > I also enjoy collaborating with my team to tackle challenging data problems. 
It's always\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > exciting to see how technology can enhance user experiences.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > Japan sounds incredible! I love how it combines the old and the new. I recently traveled\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > to Italy, which was a dream come true. The history and art in places like Rome and\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > Florence were breathtaking, and I couldn't get enough of the delicious pasta and gelato.\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > Traveling really opens up new perspectives, doesn't it?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: 
\u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;252mLisa\u001b[0m\u001b[1;2;3;38;5;252m acts: \u001b[0m\u001b[1;2;3;38;5;252m[\u001b[0m\u001b[1;2;3;38;5;252mDONE\u001b[0m\u001b[1;2;3;38;5;252m]\u001b[0m\u001b[1;2;3;38;5;252m \u001b[0m\n", "\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", - "\u001b[1;2;3;38;5;51m > That's impressive, Lisa! It's amazing how technology can make such a difference in\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > people's lives. Collaborating with a team to solve complex problems sounds like a\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > rewarding experience. It's great to hear about your passion for improving user\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > experiences. 
Do you have any exciting projects you're currently working on?\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > Italy is such a beautiful country! The art and history there are truly remarkable, and\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > I completely agree about the food—it's hard to beat authentic Italian cuisine.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > Traveling definitely broadens our horizons and gives us a deeper appreciation for\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > different cultures and ways of life. Do you have any other destinations on your travel\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > bucket list?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", + "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", + "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", + "\u001b[1;2;3;4;32mLisa\u001b[0m\u001b[1;2;3;32m acts: 
\u001b[0m\u001b[1;2;3;32m[\u001b[0m\u001b[1;2;3;32mTHINK\u001b[0m\u001b[1;2;3;32m]\u001b[0m\u001b[1;2;3;32m \u001b[0m\n", + "\u001b[1;2;3;32m > Oscar is right about traveling broadening our horizons. I should share some destinations\u001b[0m\n", + "\u001b[1;2;3;32m > on my travel bucket list with him.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;40mLisa\u001b[0m\u001b[1;2;3;38;5;40m acts: \u001b[0m\u001b[1;2;3;38;5;40m[\u001b[0m\u001b[1;2;3;38;5;40mTALK\u001b[0m\u001b[1;2;3;38;5;40m]\u001b[0m\u001b[1;2;3;38;5;40m \u001b[0m\n", - "\u001b[1;2;3;38;5;40m > Thank you, Oscar! I'm currently working on a project that involves enhancing natural\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > language understanding in search queries. 
It's a challenging but exciting area, as it\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > involves making search engines more intuitive and user-friendly. We're also exploring\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > ways to incorporate more user feedback into our models to continuously improve the\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > search experience. It's a dynamic field, and I'm always eager to see how our work can\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > make a positive impact.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > I have a few places on my travel bucket list! I'd love to visit New Zealand for its\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > stunning landscapes and outdoor adventures. Greece is also high on my list for its rich\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > history and beautiful islands. And, of course, I'd love to explore more of Asia,\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > especially South Korea and Thailand. How about you? 
Any dream destinations you haven't\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > visited yet?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;252mLisa\u001b[0m\u001b[1;2;3;38;5;252m acts: \u001b[0m\u001b[1;2;3;38;5;252m[\u001b[0m\u001b[1;2;3;38;5;252mDONE\u001b[0m\u001b[1;2;3;38;5;252m]\u001b[0m\u001b[1;2;3;38;5;252m \u001b[0m\n", "\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: 
\u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", - "\u001b[1;2;3;38;5;51m > That sounds like a fascinating project, Lisa! Enhancing natural language understanding\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > is such an important area, and it's great to hear that you're working on making search\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > engines more intuitive. Incorporating user feedback sounds like a smart approach to\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > keep improving the experience. It's exciting to be part of a field that's constantly\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > evolving and making a real difference. Keep up the great work!\u001b[0m\n" + "\u001b[1;2;3;38;5;51m > Those sound like amazing destinations! New Zealand's landscapes are indeed\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > breathtaking, and Greece's history is fascinating. For me, I'd love to visit Iceland\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > for its unique natural wonders like the Northern Lights and geysers. South America is\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > also on my list, particularly Peru for Machu Picchu and Brazil for its vibrant culture.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > Traveling is such a wonderful way to experience the world!\u001b[0m\n" ] }, "metadata": {}, @@ -1226,196 +1455,194 @@ } ], "source": [ - "agent_1.pp_current_interactions()" + "lisa.pp_current_interactions()" ] }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/html": [ - "
    >>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "
    >>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Oscar: [CONVERSATION] \n",
    -       "          > Hi Oscar, I'd love to know more about you. Could you tell me a bit about yourself?\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "          > Hi Oscar, I was hoping to learn more about you. Could you tell me a bit about yourself?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Oscar: [THOUGHT] \n",
            "           > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar acts: [TALK] \n",
            "           > Hi Lisa! Sure, I'd be happy to share a bit about myself. I'm Oscar, a 30-year-old\n",
    -       "           > architect from Germany. I work at a company called Awesome Inc., where I focus on\n",
    -       "           > designing standard elements for new apartment buildings. I love modernist architecture,\n",
    -       "           > new technologies, and sustainable practices. In my free time, I enjoy traveling to\n",
    -       "           > exotic places, playing the guitar, and reading science fiction books. How about you?\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "           > architect from Germany. I work at Awesome Inc., where I focus on designing standard\n",
    +       "           > elements for new apartment buildings. I love modernist architecture, new technologies,\n",
    +       "           > and sustainable practices. In my free time, I enjoy traveling to exotic places, playing\n",
    +       "           > the guitar, and reading science fiction. How about you?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Oscar: [THOUGHT] \n",
            "           > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar acts: [DONE] \n",
            "\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Oscar: [CONVERSATION] \n",
    -       "          > Thanks for sharing, Oscar! It's great to meet someone with such diverse interests. I'm\n",
    -       "          > Lisa, a data scientist from Canada. I work at Microsoft, focusing on improving search\n",
    -       "          > results using machine learning. In my free time, I enjoy cooking, playing the piano, and\n",
    -       "          > watching movies. It's fascinating to hear about your work in architecture and your\n",
    -       "          > passion for sustainable practices. Do you have a favorite project you've worked on?\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "          > It's great to meet you, Oscar! I'm Lisa, a 28-year-old data scientist from Canada. I\n",
    +       "          > work at Microsoft, focusing on improving search results through data analysis and\n",
    +       "          > machine learning. I love learning new things, especially in AI and natural language\n",
    +       "          > processing. In my free time, I enjoy cooking, playing the piano, and watching movies,\n",
    +       "          > particularly comedies and thrillers. It's fascinating that you enjoy modernist\n",
    +       "          > architecture and sustainable practices. Have you traveled anywhere interesting recently?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Oscar: [THOUGHT] \n",
            "           > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar acts: [TALK] \n",
    -       "           > Nice to meet you, Lisa! Your work sounds really interesting, especially with the focus\n",
    -       "           > on machine learning. As for my favorite project, I once worked on a sustainable housing\n",
    -       "           > development that incorporated a lot of green technologies and materials. It was\n",
    -       "           > challenging but incredibly rewarding to see it come to life. How about you? Do you have\n",
    -       "           > a favorite project or achievement in your field?\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "           > Nice to meet you, Lisa! Your work sounds really interesting, especially with AI and\n",
    +       "           > machine learning. As for traveling, I recently visited Japan, which was an amazing\n",
    +       "           > experience. The blend of traditional and modern architecture there is truly inspiring.\n",
    +       "           > Plus, the food was incredible! How about you? Have you been on any exciting trips\n",
    +       "           > lately?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Oscar: [THOUGHT] \n",
            "           > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar acts: [DONE] \n",
            "\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Oscar: [CONVERSATION] \n",
    -       "          > That sounds like an amazing project, Oscar! It's inspiring to hear about sustainable\n",
    -       "          > developments. In my field, one of my favorite achievements was developing a machine\n",
    -       "          > learning model that significantly improved the relevance of search results for our\n",
    -       "          > users. It was a complex task, but seeing the positive impact it had was very fulfilling.\n",
    -       "          > I also enjoy collaborating with my team to tackle challenging data problems. It's always\n",
    -       "          > exciting to see how technology can enhance user experiences.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "          > Japan sounds incredible! I love how it combines the old and the new. I recently traveled\n",
    +       "          > to Italy, which was a dream come true. The history and art in places like Rome and\n",
    +       "          > Florence were breathtaking, and I couldn't get enough of the delicious pasta and gelato.\n",
    +       "          > Traveling really opens up new perspectives, doesn't it?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Oscar: [THOUGHT] \n",
            "           > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar acts: [TALK] \n",
    -       "           > That's impressive, Lisa! It's amazing how technology can make such a difference in\n",
    -       "           > people's lives. Collaborating with a team to solve complex problems sounds like a\n",
    -       "           > rewarding experience. It's great to hear about your passion for improving user\n",
    -       "           > experiences. Do you have any exciting projects you're currently working on?\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "           > Italy is such a beautiful country! The art and history there are truly remarkable, and\n",
    +       "           > I completely agree about the food—it's hard to beat authentic Italian cuisine.\n",
    +       "           > Traveling definitely broadens our horizons and gives us a deeper appreciation for\n",
    +       "           > different cultures and ways of life. Do you have any other destinations on your travel\n",
    +       "           > bucket list?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Oscar: [THOUGHT] \n",
            "           > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar acts: [DONE] \n",
            "\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Lisa --> Oscar: [CONVERSATION] \n",
    -       "          > Thank you, Oscar! I'm currently working on a project that involves enhancing natural\n",
    -       "          > language understanding in search queries. It's a challenging but exciting area, as it\n",
    -       "          > involves making search engines more intuitive and user-friendly. We're also exploring\n",
    -       "          > ways to incorporate more user feedback into our models to continuously improve the\n",
    -       "          > search experience. It's a dynamic field, and I'm always eager to see how our work can\n",
    -       "          > make a positive impact.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "          > I have a few places on my travel bucket list! I'd love to visit New Zealand for its\n",
    +       "          > stunning landscapes and outdoor adventures. Greece is also high on my list for its rich\n",
    +       "          > history and beautiful islands. And, of course, I'd love to explore more of Asia,\n",
    +       "          > especially South Korea and Thailand. How about you? Any dream destinations you haven't\n",
    +       "          > visited yet?\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Oscar: [THOUGHT] \n",
            "           > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar acts: [TALK] \n",
    -       "           > That sounds like a fascinating project, Lisa! Enhancing natural language understanding\n",
    -       "           > is such an important area, and it's great to hear that you're working on making search\n",
    -       "           > engines more intuitive. Incorporating user feedback sounds like a smart approach to\n",
    -       "           > keep improving the experience. It's exciting to be part of a field that's constantly\n",
    -       "           > evolving and making a real difference. Keep up the great work!\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       "           > Those sound like amazing destinations! New Zealand's landscapes are indeed\n",
    +       "           > breathtaking, and Greece's history is fascinating. For me, I'd love to visit Iceland\n",
    +       "           > for its unique natural wonders like the Northern Lights and geysers. South America is\n",
    +       "           > also on my list, particularly Peru for Machu Picchu and Brazil for its vibrant culture.\n",
    +       "           > Traveling is such a wonderful way to experience the world!\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar --> Oscar: [THOUGHT] \n",
            "           > I will now act a bit, and then issue DONE.\n",
    -       ">>>>>>>>> Date and time of events: 2024-11-04T23:08:03.173089\n",
    +       ">>>>>>>>> Date and time of events: 2024-11-10T23:56:29.864490\n",
            "Oscar acts: [DONE] \n",
            "\n",
            "
    \n" ], "text/plain": [ - ">>>>>>>>> Date and time of events: \u001b[1;36m2024\u001b[0m-\u001b[1;36m11\u001b[0m-04T\u001b[1;92m23:08:03\u001b[0m.\u001b[1;36m173089\u001b[0m\n", + ">>>>>>>>> Date and time of events: \u001b[1;36m2024\u001b[0m-\u001b[1;36m11\u001b[0m-10T\u001b[1;92m23:56:29\u001b[0m.\u001b[1;36m864490\u001b[0m\n", "\u001b[1;3;4;38;5;51mLisa\u001b[0m\u001b[1;3;38;5;51m --> \u001b[0m\u001b[1;3;4;38;5;51mOscar\u001b[0m\u001b[1;3;38;5;51m: \u001b[0m\u001b[1;3;38;5;51m[\u001b[0m\u001b[1;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;3;38;5;51m]\u001b[0m\u001b[1;3;38;5;51m \u001b[0m\n", - "\u001b[1;3;38;5;51m > Hi Oscar, I'd love to know more about you. Could you tell me a bit about yourself?\u001b[0m\n", - "\u001b[1;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;3;38;5;51m2024\u001b[0m\u001b[1;3;38;5;51m-\u001b[0m\u001b[1;3;38;5;51m11\u001b[0m\u001b[1;3;38;5;51m-04T\u001b[0m\u001b[1;3;38;5;51m23:08:03\u001b[0m\u001b[1;3;38;5;51m.\u001b[0m\u001b[1;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;3;38;5;51m > Hi Oscar, I was hoping to learn more about you. 
Could you tell me a bit about yourself?\u001b[0m\n", + "\u001b[1;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;3;38;5;51m2024\u001b[0m\u001b[1;3;38;5;51m-\u001b[0m\u001b[1;3;38;5;51m11\u001b[0m\u001b[1;3;38;5;51m-10T\u001b[0m\u001b[1;3;38;5;51m23:56:29\u001b[0m\u001b[1;3;38;5;51m.\u001b[0m\u001b[1;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;40mOscar\u001b[0m\u001b[1;2;3;38;5;40m acts: \u001b[0m\u001b[1;2;3;38;5;40m[\u001b[0m\u001b[1;2;3;38;5;40mTALK\u001b[0m\u001b[1;2;3;38;5;40m]\u001b[0m\u001b[1;2;3;38;5;40m \u001b[0m\n", "\u001b[1;2;3;38;5;40m > Hi Lisa! Sure, I'd be happy to share a bit about myself. I'm Oscar, a \u001b[0m\u001b[1;2;3;38;5;40m30\u001b[0m\u001b[1;2;3;38;5;40m-year-old\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > architect from Germany. I work at a company called Awesome Inc., where I focus on\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > designing standard elements for new apartment buildings. 
I love modernist architecture,\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > new technologies, and sustainable practices. In my free time, I enjoy traveling to\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > exotic places, playing the guitar, and reading science fiction books. How about you?\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > architect from Germany. I work at Awesome Inc., where I focus on designing standard\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > elements for new apartment buildings. I love modernist architecture, new technologies,\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > and sustainable practices. In my free time, I enjoy traveling to exotic places, playing\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > the guitar, and reading science fiction. 
How about you?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;252mOscar\u001b[0m\u001b[1;2;3;38;5;252m acts: \u001b[0m\u001b[1;2;3;38;5;252m[\u001b[0m\u001b[1;2;3;38;5;252mDONE\u001b[0m\u001b[1;2;3;38;5;252m]\u001b[0m\u001b[1;2;3;38;5;252m \u001b[0m\n", "\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: 
\u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", - "\u001b[1;2;3;38;5;51m > Thanks for sharing, Oscar! It's great to meet someone with such diverse interests. I'm\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > Lisa, a data scientist from Canada. I work at Microsoft, focusing on improving search\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > results using machine learning. In my free time, I enjoy cooking, playing the piano, and\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > watching movies. It's fascinating to hear about your work in architecture and your\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > passion for sustainable practices. Do you have a favorite project you've worked on?\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > It's great to meet you, Oscar! I'm Lisa, a \u001b[0m\u001b[1;2;3;38;5;51m28\u001b[0m\u001b[1;2;3;38;5;51m-year-old data scientist from Canada. I\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > work at Microsoft, focusing on improving search results through data analysis and\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > machine learning. I love learning new things, especially in AI and natural language\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > processing. 
In my free time, I enjoy cooking, playing the piano, and watching movies,\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > particularly comedies and thrillers. It's fascinating that you enjoy modernist\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > architecture and sustainable practices. Have you traveled anywhere interesting recently?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;40mOscar\u001b[0m\u001b[1;2;3;38;5;40m acts: \u001b[0m\u001b[1;2;3;38;5;40m[\u001b[0m\u001b[1;2;3;38;5;40mTALK\u001b[0m\u001b[1;2;3;38;5;40m]\u001b[0m\u001b[1;2;3;38;5;40m \u001b[0m\n", - "\u001b[1;2;3;38;5;40m > Nice to meet you, Lisa! 
Your work sounds really interesting, especially with the focus\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > on machine learning. As for my favorite project, I once worked on a sustainable housing\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > development that incorporated a lot of green technologies and materials. It was\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > challenging but incredibly rewarding to see it come to life. How about you? Do you have\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > a favorite project or achievement in your field?\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > Nice to meet you, Lisa! Your work sounds really interesting, especially with AI and\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > machine learning. As for traveling, I recently visited Japan, which was an amazing\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > experience. The blend of traditional and modern architecture there is truly inspiring.\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > Plus, the food was incredible! How about you? 
Have you been on any exciting trips\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > lately?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;252mOscar\u001b[0m\u001b[1;2;3;38;5;252m acts: \u001b[0m\u001b[1;2;3;38;5;252m[\u001b[0m\u001b[1;2;3;38;5;252mDONE\u001b[0m\u001b[1;2;3;38;5;252m]\u001b[0m\u001b[1;2;3;38;5;252m \u001b[0m\n", "\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: 
\u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", - "\u001b[1;2;3;38;5;51m > That sounds like an amazing project, Oscar! It's inspiring to hear about sustainable\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > developments. In my field, one of my favorite achievements was developing a machine\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > learning model that significantly improved the relevance of search results for our\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > users. It was a complex task, but seeing the positive impact it had was very fulfilling.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > I also enjoy collaborating with my team to tackle challenging data problems. It's always\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > exciting to see how technology can enhance user experiences.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > Japan sounds incredible! I love how it combines the old and the new. I recently traveled\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > to Italy, which was a dream come true. 
The history and art in places like Rome and\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > Florence were breathtaking, and I couldn't get enough of the delicious pasta and gelato.\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > Traveling really opens up new perspectives, doesn't it?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;40mOscar\u001b[0m\u001b[1;2;3;38;5;40m acts: \u001b[0m\u001b[1;2;3;38;5;40m[\u001b[0m\u001b[1;2;3;38;5;40mTALK\u001b[0m\u001b[1;2;3;38;5;40m]\u001b[0m\u001b[1;2;3;38;5;40m \u001b[0m\n", - "\u001b[1;2;3;38;5;40m > That's impressive, Lisa! It's amazing how technology can make such a difference in\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > people's lives. 
Collaborating with a team to solve complex problems sounds like a\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > rewarding experience. It's great to hear about your passion for improving user\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > experiences. Do you have any exciting projects you're currently working on?\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > Italy is such a beautiful country! The art and history there are truly remarkable, and\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > I completely agree about the food—it's hard to beat authentic Italian cuisine.\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > Traveling definitely broadens our horizons and gives us a deeper appreciation for\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > different cultures and ways of life. 
Do you have any other destinations on your travel\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > bucket list?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;252mOscar\u001b[0m\u001b[1;2;3;38;5;252m acts: \u001b[0m\u001b[1;2;3;38;5;252m[\u001b[0m\u001b[1;2;3;38;5;252mDONE\u001b[0m\u001b[1;2;3;38;5;252m]\u001b[0m\u001b[1;2;3;38;5;252m \u001b[0m\n", "\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: 
\u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mLisa\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mCONVERSATION\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", - "\u001b[1;2;3;38;5;51m > Thank you, Oscar! I'm currently working on a project that involves enhancing natural\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > language understanding in search queries. It's a challenging but exciting area, as it\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > involves making search engines more intuitive and user-friendly. We're also exploring\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > ways to incorporate more user feedback into our models to continuously improve the\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > search experience. It's a dynamic field, and I'm always eager to see how our work can\u001b[0m\n", - "\u001b[1;2;3;38;5;51m > make a positive impact.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > I have a few places on my travel bucket list! I'd love to visit New Zealand for its\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > stunning landscapes and outdoor adventures. Greece is also high on my list for its rich\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > history and beautiful islands. And, of course, I'd love to explore more of Asia,\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > especially South Korea and Thailand. How about you? 
Any dream destinations you haven't\u001b[0m\n", + "\u001b[1;2;3;38;5;51m > visited yet?\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;40mOscar\u001b[0m\u001b[1;2;3;38;5;40m acts: \u001b[0m\u001b[1;2;3;38;5;40m[\u001b[0m\u001b[1;2;3;38;5;40mTALK\u001b[0m\u001b[1;2;3;38;5;40m]\u001b[0m\u001b[1;2;3;38;5;40m \u001b[0m\n", - "\u001b[1;2;3;38;5;40m > That sounds like a fascinating project, Lisa! Enhancing natural language understanding\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > is such an important area, and it's great to hear that you're working on making search\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > engines more intuitive. 
Incorporating user feedback sounds like a smart approach to\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > keep improving the experience. It's exciting to be part of a field that's constantly\u001b[0m\n", - "\u001b[1;2;3;38;5;40m > evolving and making a real difference. Keep up the great work!\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > Those sound like amazing destinations! New Zealand's landscapes are indeed\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > breathtaking, and Greece's history is fascinating. For me, I'd love to visit Iceland\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > for its unique natural wonders like the Northern Lights and geysers. South America is\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > also on my list, particularly Peru for Machu Picchu and Brazil for its vibrant culture.\u001b[0m\n", + "\u001b[1;2;3;38;5;40m > Traveling is such a wonderful way to experience the world!\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m --> \u001b[0m\u001b[1;2;3;4;38;5;51mOscar\u001b[0m\u001b[1;2;3;38;5;51m: \u001b[0m\u001b[1;2;3;38;5;51m[\u001b[0m\u001b[1;2;3;38;5;51mTHOUGHT\u001b[0m\u001b[1;2;3;38;5;51m]\u001b[0m\u001b[1;2;3;38;5;51m \u001b[0m\n", "\u001b[1;2;3;38;5;51m > I will now act a bit, and then issue DONE.\u001b[0m\n", - "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: 
\u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-04T\u001b[0m\u001b[1;2;3;38;5;51m23:08:03\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m173089\u001b[0m\n", + "\u001b[1;2;3;38;5;51m>>>>>>>>> Date and time of events: \u001b[0m\u001b[1;2;3;38;5;51m2024\u001b[0m\u001b[1;2;3;38;5;51m-\u001b[0m\u001b[1;2;3;38;5;51m11\u001b[0m\u001b[1;2;3;38;5;51m-10T\u001b[0m\u001b[1;2;3;38;5;51m23:56:29\u001b[0m\u001b[1;2;3;38;5;51m.\u001b[0m\u001b[1;2;3;38;5;51m864490\u001b[0m\n", "\u001b[1;2;3;4;38;5;252mOscar\u001b[0m\u001b[1;2;3;38;5;252m acts: \u001b[0m\u001b[1;2;3;38;5;252m[\u001b[0m\u001b[1;2;3;38;5;252mDONE\u001b[0m\u001b[1;2;3;38;5;252m]\u001b[0m\u001b[1;2;3;38;5;252m \u001b[0m\n", "\n" ] @@ -1425,7 +1652,7 @@ } ], "source": [ - "agent_2.pp_current_interactions()" + "oscar.pp_current_interactions()" ] }, { diff --git a/examples/synthetic_data_generation.ipynb b/examples/synthetic_data_generation.ipynb index 06b301e..1400c28 100644 --- a/examples/synthetic_data_generation.ipynb +++ b/examples/synthetic_data_generation.ipynb @@ -9,7 +9,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -39,9 +39,9 @@ "import tinytroupe\n", "from tinytroupe.agent import TinyPerson\n", "from tinytroupe.environment import TinyWorld, TinySocialNetwork\n", - "from tinytroupe.personfactory import TinyPersonFactory\n", + "from tinytroupe.factory import TinyPersonFactory\n", "from tinytroupe.extraction import default_extractor as extractor\n", - "from tinytroupe.extraction import InteractionResultsReducer\n", + "from tinytroupe.extraction import ResultsReducer\n", "import tinytroupe.control as control" ] }, @@ -902,11 +902,11 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ - "reducer = InteractionResultsReducer()\n", + "reducer = ResultsReducer()\n", "\n", "def 
aux_extract_content(focus_agent: TinyPerson, source_agent:TinyPerson, target_agent:TinyPerson, kind:str, event: str, content: str, timestamp:str):\n", "\n", diff --git a/examples/wordprocessor_tool_usage.ipynb b/examples/wordprocessor_tool_usage.ipynb index 0ce2554..6a5b3d3 100644 --- a/examples/wordprocessor_tool_usage.ipynb +++ b/examples/wordprocessor_tool_usage.ipynb @@ -10,7 +10,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -39,12 +39,12 @@ "\n", "import tinytroupe\n", "from tinytroupe.openai_utils import force_api_type\n", - "from tinytroupe.personfactory import TinyPersonFactory\n", - "from tinytroupe.agent import TinyPerson, ToolUse\n", + "from tinytroupe.factory import TinyPersonFactory\n", + "from tinytroupe.agent import TinyPerson, TinyToolUse\n", "from tinytroupe.environment import TinyWorld\n", "from tinytroupe import control\n", - "from tinytroupe.extraction import InteractionResultsExtractor, InteractionResultsReducer\n", - "from tinytroupe.enrichment import Enricher\n", + "from tinytroupe.extraction import ResultsExtractor, ResultsReducer\n", + "from tinytroupe.enrichment import TinyEnricher\n", "from tinytroupe.extraction import ArtifactExporter\n", "from tinytroupe.tools import TinyWordProcessor\n", "from tinytroupe.story import TinyStory\n", @@ -63,13 +63,13 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": null, "metadata": {}, "outputs": [], "source": [ "exporter = ArtifactExporter(base_output_folder=data_export_folder)\n", - "enricher = Enricher()\n", - "tooluse_faculty = ToolUse(tools=[TinyWordProcessor(exporter=exporter, enricher=enricher)])" + "enricher = TinyEnricher()\n", + "tooluse_faculty = TinyToolUse(tools=[TinyWordProcessor(exporter=exporter, enricher=enricher)])" ] }, { diff --git a/tests/scenarios/test_advertisement_scenarios.py b/tests/scenarios/test_advertisement_scenarios.py index 0594bfb..c4852ce 100644 --- 
a/tests/scenarios/test_advertisement_scenarios.py +++ b/tests/scenarios/test_advertisement_scenarios.py @@ -11,8 +11,8 @@ import tinytroupe from tinytroupe.agent import TinyPerson from tinytroupe.environment import TinyWorld, TinySocialNetwork -from tinytroupe.personfactory import TinyPersonFactory -from tinytroupe.extraction import InteractionResultsExtractor +from tinytroupe.factory import TinyPersonFactory +from tinytroupe.extraction import ResultsExtractor from tinytroupe.examples import create_lisa_the_data_scientist, create_oscar_the_architect, create_marcos_the_physician from tinytroupe.extraction import default_extractor as extractor @@ -138,7 +138,7 @@ def test_ad_evaluation_scenario(setup): person.change_context(situation) person.listen_and_act(eval_request_msg) - extractor = InteractionResultsExtractor() + extractor = ResultsExtractor() choices = [] for person in people: diff --git a/tests/scenarios/test_basic_scenarios.py b/tests/scenarios/test_basic_scenarios.py index d36f205..348bbe1 100644 --- a/tests/scenarios/test_basic_scenarios.py +++ b/tests/scenarios/test_basic_scenarios.py @@ -11,8 +11,8 @@ import tinytroupe from tinytroupe.agent import TinyPerson from tinytroupe.environment import TinyWorld, TinySocialNetwork -from tinytroupe.personfactory import TinyPersonFactory -from tinytroupe.extraction import InteractionResultsExtractor +from tinytroupe.factory import TinyPersonFactory +from tinytroupe.extraction import ResultsExtractor from tinytroupe.examples import create_lisa_the_data_scientist, create_oscar_the_architect, create_marcos_the_physician from tinytroupe.extraction import default_extractor as extractor diff --git a/tests/scenarios/test_brainstorming_scenarios.py b/tests/scenarios/test_brainstorming_scenarios.py index 25af633..4a31b14 100644 --- a/tests/scenarios/test_brainstorming_scenarios.py +++ b/tests/scenarios/test_brainstorming_scenarios.py @@ -11,8 +11,8 @@ import tinytroupe from tinytroupe.agent import TinyPerson from 
tinytroupe.environment import TinyWorld, TinySocialNetwork -from tinytroupe.personfactory import TinyPersonFactory -from tinytroupe.extraction import InteractionResultsExtractor +from tinytroupe.factory import TinyPersonFactory +from tinytroupe.extraction import ResultsExtractor from tinytroupe.examples import create_lisa_the_data_scientist, create_oscar_the_architect, create_marcos_the_physician from tinytroupe.extraction import default_extractor as extractor @@ -38,14 +38,14 @@ def test_brainstorming_scenario(setup, focus_group_world): agent.listen_and_act("Can you please summarize the ideas that the group came up with?") - from tinytroupe.extraction import InteractionResultsExtractor + from tinytroupe.extraction import ResultsExtractor - extractor = InteractionResultsExtractor() + extractor = ResultsExtractor() results = extractor.extract_results_from_agent(agent, extraction_objective="Summarize the the ideas that the group came up with, explaining each idea as an item of a list. Describe in details the benefits and drawbacks of each.", situation="A focus group to brainstorm ideas for a new product.") - print(results) + print("Brainstorm Results: ", results) assert proposition_holds(f"The following contains some ideas for new product features or entirely new products: '{results}'"), f"Proposition is false according to the LLM." 
diff --git a/tests/scenarios/test_jupyter_examples.py b/tests/scenarios/test_jupyter_examples.py new file mode 100644 index 0000000..a3d37c7 --- /dev/null +++ b/tests/scenarios/test_jupyter_examples.py @@ -0,0 +1,49 @@ +import os +import nbformat +from nbconvert.preprocessors import ExecutePreprocessor +import pytest + +import sys +sys.path.insert(0, '../../tinytroupe/') # ensures that the package is imported from the parent directory, not the Python installation +sys.path.insert(0, '../../') # ensures that the package is imported from the parent directory, not the Python installation +sys.path.insert(0, '..') # ensures that the package is imported from the parent directory, not the Python installation + +# Set the folder containing the notebooks +NOTEBOOK_FOLDER = "../examples/" # Update this path + +# Set a timeout for long-running notebooks +TIMEOUT = 600 + +KERNEL_NAME = "python3" #"py310" + + +def get_notebooks(folder): + """Retrieve all Jupyter notebook files from the specified folder.""" + return [ + os.path.join(folder, f) + for f in os.listdir(folder) + if f.endswith(".ipynb") and not ".executed." in f and not ".local." 
in f + ] + +@pytest.mark.parametrize("notebook_path", get_notebooks(NOTEBOOK_FOLDER)) +def test_notebook_execution(notebook_path): + """Execute a Jupyter notebook and assert that no exceptions occur.""" + with open(notebook_path, "r", encoding="utf-8") as nb_file: + notebook = nbformat.read(nb_file, as_version=4) + print(f"Executing notebook: {notebook_path} with kernel: {KERNEL_NAME}") + ep = ExecutePreprocessor(timeout=TIMEOUT, kernel_name=KERNEL_NAME) + + try: + ep.preprocess(notebook, {'metadata': {'path': NOTEBOOK_FOLDER}}) + print(f"Notebook {notebook_path} executed successfully.") + + except Exception as e: + pytest.fail(f"Notebook {notebook_path} raised an exception: {e}") + + finally: + # save a copy of the executed notebook + output_path = notebook_path.replace(".ipynb", ".executed.local.ipynb") + with open(output_path, "w", encoding="utf-8") as out_file: + nbformat.write(notebook, out_file) + + print(f"Executed notebook saved as: {output_path}") diff --git a/tests/unit/test_control.py b/tests/unit/test_control.py index 7b2706d..2bc99b2 100644 --- a/tests/unit/test_control.py +++ b/tests/unit/test_control.py @@ -8,12 +8,12 @@ from tinytroupe.examples import create_oscar_the_architect, create_lisa_the_data_scientist -from tinytroupe.agent import TinyPerson, ToolUse +from tinytroupe.agent import TinyPerson, TinyToolUse from tinytroupe.environment import TinyWorld from tinytroupe.control import Simulation import tinytroupe.control as control -from tinytroupe.personfactory import TinyPersonFactory -from tinytroupe.enrichment import Enricher +from tinytroupe.factory import TinyPersonFactory +from tinytroupe.enrichment import TinyEnricher from tinytroupe.extraction import ArtifactExporter from tinytroupe.tools import TinyWordProcessor @@ -40,8 +40,8 @@ def test_begin_checkpoint_end_with_agent_only(setup): exporter = ArtifactExporter(base_output_folder="./synthetic_data_exports_3/") - enricher = Enricher() - tooluse_faculty = 
ToolUse(tools=[TinyWordProcessor(exporter=exporter, enricher=enricher)]) + enricher = TinyEnricher() + tooluse_faculty = TinyToolUse(tools=[TinyWordProcessor(exporter=exporter, enricher=enricher)]) agent_1 = create_oscar_the_architect() agent_1.add_mental_faculties([tooluse_faculty]) diff --git a/tests/unit/test_enrichment.py b/tests/unit/test_enrichment.py index cf3b501..77d64ca 100644 --- a/tests/unit/test_enrichment.py +++ b/tests/unit/test_enrichment.py @@ -11,7 +11,7 @@ from testing_utils import * -from tinytroupe.enrichment import Enricher +from tinytroupe.enrichment import TinyEnricher def test_enrich_content(): @@ -50,7 +50,7 @@ def test_enrich_content(): The result **MUST** be at least 3 times larger than the original content in terms of characters - do whatever it takes to make it this long and detailed. """).strip() - result = Enricher().enrich_content(requirements=requirements, + result = TinyEnricher().enrich_content(requirements=requirements, content=content_to_enrich, content_type="Document", context_info="WonderCode was approached by Microsoft to for a partnership.", diff --git a/tests/unit/test_personfactory.py b/tests/unit/test_factory.py similarity index 94% rename from tests/unit/test_personfactory.py rename to tests/unit/test_factory.py index 1dff6c5..f7c57f2 100644 --- a/tests/unit/test_personfactory.py +++ b/tests/unit/test_factory.py @@ -10,7 +10,7 @@ from tinytroupe.examples import create_oscar_the_architect from tinytroupe.control import Simulation import tinytroupe.control as control -from tinytroupe.personfactory import TinyPersonFactory +from tinytroupe.factory import TinyPersonFactory from testing_utils import * diff --git a/tests/unit/test_story.py b/tests/unit/test_story.py index 9fb26f4..01bdb78 100644 --- a/tests/unit/test_story.py +++ b/tests/unit/test_story.py @@ -11,8 +11,8 @@ import tinytroupe from tinytroupe.agent import TinyPerson from tinytroupe.environment import TinyWorld, TinySocialNetwork -from tinytroupe.personfactory 
import TinyPersonFactory -from tinytroupe.extraction import InteractionResultsExtractor +from tinytroupe.factory import TinyPersonFactory +from tinytroupe.extraction import ResultsExtractor from tinytroupe.story import TinyStory from tinytroupe.examples import create_lisa_the_data_scientist, create_oscar_the_architect, create_marcos_the_physician diff --git a/tests/unit/test_personchecker.py b/tests/unit/test_validation.py similarity index 56% rename from tests/unit/test_personchecker.py rename to tests/unit/test_validation.py index 798af18..07385bb 100644 --- a/tests/unit/test_personchecker.py +++ b/tests/unit/test_validation.py @@ -10,8 +10,8 @@ from tinytroupe.examples import create_oscar_the_architect from tinytroupe.control import Simulation import tinytroupe.control as control -from tinytroupe.personfactory import TinyPersonFactory -from tinytroupe.personchecker import TinyPersonChecker +from tinytroupe.factory import TinyPersonFactory +from tinytroupe.validation import TinyPersonValidator from testing_utils import * @@ -47,7 +47,9 @@ def test_validate_person(setup): - Is a bit of a snob - Might pretend to be a hard-core woke, but in reality that's just a facade to climb the corporate ladder """ - banker_score, banker_justification = TinyPersonChecker.validate_person(banker, expectations=banker_expectations, include_agent_spec=False, max_content_length=None) + banker_score, banker_justification = TinyPersonValidator.validate_person(banker, expectations=banker_expectations, include_agent_spec=False, max_content_length=None) + print("Banker score: ", banker_score) + print("Banker justification: ", banker_justification) assert banker_score > 0.5, f"Validation score is too low: {banker_score:.2f}" @@ -55,30 +57,32 @@ def test_validate_person(setup): ########################## # Busy Knowledge Worker ########################## - bkw_spec =\ + monk_spec =\ """ - A typical knowledge worker in a large corporation grinding his way into upper middle class. 
+ A poor Buddhist monk living alone and isolated in a remote mountain. """ - bkw_factory = TinyPersonFactory(bkw_spec) - busy_knowledge_worker = bkw_factory.generate_person() - bkw_expectations =\ + monk_spec_factory = TinyPersonFactory(monk_spec) + monk = monk_spec_factory.generate_person() + monk_expectations =\ """ Some characteristics of this person: - - Very busy - - Likes to have lunch with colleagues - - To travel during vacations - - Is married and worrying about the cost of living, particularly regarding his/her children - - Has some stress issues, and potentially some psychiatric problems - - Went to college and has a degree in some technical field - - Has some very specific skills - - Does not have a wide range of interests, being more focused on his/her career, family and very few hobbies if any + - Is very poor, and in fact does not seek money + - Has no formal education, but is very wise + - Is very calm and patient + - Is very humble and does not seek attention + - Honesty is a core value """ - bkw_score, bkw_justification = TinyPersonChecker.validate_person(busy_knowledge_worker, expectations=bkw_expectations, include_agent_spec=False, max_content_length=None) + monk_score, monk_justification = TinyPersonValidator.validate_person(monk, expectations=monk_expectations, include_agent_spec=False, max_content_length=None) + print("Monk score: ", monk_score) + print("Monk justification: ", monk_justification) + - assert bkw_score > 0.5, f"Validation score is too low: {bkw_score:.2f}" + assert monk_score > 0.5, f"Validation score is too low: {monk_score:.2f}" # Now, let's check the score for the busy knowledge worker with the wrong expectations! It has to be low! 
- wrong_expectations_score, wrong_expectations_justification = TinyPersonChecker.validate_person(busy_knowledge_worker, expectations=banker_expectations, include_agent_spec=False, max_content_length=None) + wrong_expectations_score, wrong_expectations_justification = TinyPersonValidator.validate_person(monk, expectations=banker_expectations, include_agent_spec=False, max_content_length=None) - assert wrong_expectations_score < 0.5, f"Validation score is too high: {wrong_expectations_score:.2f}" \ No newline at end of file + assert wrong_expectations_score < 0.5, f"Validation score is too high: {wrong_expectations_score:.2f}" + print("Wrong expectations score: ", wrong_expectations_score) + print("Wrong expectations justification: ", wrong_expectations_justification) \ No newline at end of file diff --git a/tinytroupe/__init__.py b/tinytroupe/__init__.py index 6b20938..65f0889 100644 --- a/tinytroupe/__init__.py +++ b/tinytroupe/__init__.py @@ -9,13 +9,6 @@ sys.path.append('.') from tinytroupe import utils # now we can import our utils -config = utils.read_config_file() -utils.start_logger(config) - -# fix an issue in the rich library: we don't want margins in Jupyter! -rich.jupyter.JUPYTER_HTML_FORMAT = \ - utils.inject_html_css_style_prefix(rich.jupyter.JUPYTER_HTML_FORMAT, "margin:0px;") - # AI disclaimers print(\ """ @@ -24,4 +17,13 @@ The AI models are not perfect and may produce inappropriate or inacurate results. For any serious or consequential use, please review the generated content before using it. !!!! -""") \ No newline at end of file +""") + +config = utils.read_config_file() +utils.pretty_print_config(config) +utils.start_logger(config) + +# fix an issue in the rich library: we don't want margins in Jupyter! 
+rich.jupyter.JUPYTER_HTML_FORMAT = \
+    utils.inject_html_css_style_prefix(rich.jupyter.JUPYTER_HTML_FORMAT, "margin:0px;")
+
diff --git a/tinytroupe/agent.py b/tinytroupe/agent.py
index d25ebba..a18598d 100644
--- a/tinytroupe/agent.py
+++ b/tinytroupe/agent.py
@@ -1243,9 +1243,9 @@ def clear_agents():
 # Mental faculties
 #######################################################################################################################
 
-class Faculty(JsonSerializableRegistry):
+class TinyMentalFaculty(JsonSerializableRegistry):
     """
-    Represents an optional mental faculty of an agent. Mental faculties are the cognitive abilities that an agent has.
+    Represents a mental faculty of an agent. Mental faculties are the cognitive abilities that an agent has.
     """
 
     def __init__(self, name: str, requires_faculties: list=None) -> None:
@@ -1264,10 +1264,10 @@ def __init__(self, name: str, requires_faculties: list=None) -> None:
         self.requires_faculties = requires_faculties
 
     def __str__(self) -> str:
-        return f"Faculty: {self.name}"
+        return f"Mental Faculty: {self.name}"
 
     def __eq__(self, other):
-        if isinstance(other, Faculty):
+        if isinstance(other, TinyMentalFaculty):
             return self.name == other.name
         return False
@@ -1296,7 +1296,7 @@ def actions_constraints_prompt(self) -> str:
         raise NotImplementedError("Subclasses must implement this method.")
 
 
-class RecallFaculty(Faculty):
+class RecallFaculty(TinyMentalFaculty):
 
     def __init__(self):
         super().__init__("Memory Recall")
@@ -1375,7 +1375,7 @@ def actions_constraints_prompt(self) -> str:
         return textwrap.dedent(prompt)
 
 
-class FilesAndWebGroundingFaculty(Faculty):
+class FilesAndWebGroundingFaculty(TinyMentalFaculty):
     """
     Allows the agent to access local files and web pages to ground its knowledge.
     """
@@ -1452,7 +1452,7 @@ def actions_constraints_prompt(self) -> str:
         return textwrap.dedent(prompt)
 
 
-class ToolUse(Faculty):
+class TinyToolUse(TinyMentalFaculty):
     """
     Allows the agent to use tools to accomplish tasks. Tool usage is one of the most important cognitive skills humans and primates have as we know.
@@ -1491,7 +1491,7 @@ def actions_constraints_prompt(self) -> str:
 # Memory mechanisms
 #######################################################################################################################
 
-class Memory(Faculty):
+class TinyMemory(TinyMentalFaculty):
     """
     Base class for different types of memory.
     """
@@ -1537,7 +1537,7 @@ def retrieve_relevant(self, relevance_target:str, top_k=5) -> list:
 
 
-class EpisodicMemory(Memory):
+class EpisodicMemory(TinyMemory):
     """
     Provides episodic memory capabilities to an agent. Cognitively, episodic memory is the ability to remember specific events,
     or episodes, in the past. This class provides a simple implementation of episodic memory, where the agent can store and retrieve
@@ -1651,7 +1651,7 @@ def retrieve_last(self, n: int, include_omission_info:bool=True) -> list:
         return omisssion_info + self.memory[-n:]
 
 
-class SemanticMemory(Memory):
+class SemanticMemory(TinyMemory):
     """
     Semantic memory is the memory of meanings, understandings, and other concept-based knowledge unrelated to specific experiences.
     It is not ordered temporally, and it is not about remembering specific events or episodes.
     This class provides a simple implementation
diff --git a/tinytroupe/config.ini b/tinytroupe/config.ini
index 57e447f..b668a1a 100644
--- a/tinytroupe/config.ini
+++ b/tinytroupe/config.ini
@@ -4,7 +4,6 @@
 #
 # Default options: openai, azure
-# Internal Microsoft options (only for Microsoft employees): microsoft-internal
 API_TYPE=openai
 
 # Check Azure's documentation for updates here:
@@ -15,9 +14,6 @@
 AZURE_API_VERSION=2023-05-15
 
 #
 # Model parameters
 #
-# Other OpenAI models: "gpt-3.5-turbo-16k-0613" #gpt-4 #gpt-4-1106-preview #gpt-4-0613 #"gpt-4-0314" #"gpt-3.5-turbo-0613" #"gpt-3.5-turbo-16k" #"gpt-4", gpt-3.5-turbo-16k, gpt-3.5-turbo
-# Other Azure OpenAI Service models (deployment-dependent, yours might vary): gpt-4-32k, gpt-4-8k
-####### Internal Microsoft Polymer LLM models: prod-ppo ############
 MODEL=gpt-4o
 MAX_TOKENS=4000
 TEMPERATURE=0.3
diff --git a/tinytroupe/control.py b/tinytroupe/control.py
index 4149506..2ae1092 100644
--- a/tinytroupe/control.py
+++ b/tinytroupe/control.py
@@ -72,7 +72,7 @@ def begin(self, cache_path:str=None, auto_checkpoint:bool=False):
         # local import to avoid circular dependencies
         from tinytroupe.agent import TinyPerson
         from tinytroupe.environment import TinyWorld
-        from tinytroupe.personfactory import TinyFactory
+        from tinytroupe.factory import TinyFactory
 
         if self.status == Simulation.STATUS_STOPPED:
             self.status = Simulation.STATUS_STARTED
@@ -379,7 +379,7 @@ def __init__(self, obj_under_transaction, simulation, function, *args, **kwargs)
         # local import to avoid circular dependencies
         from tinytroupe.agent import TinyPerson
         from tinytroupe.environment import TinyWorld
-        from tinytroupe.personfactory import TinyFactory
+        from tinytroupe.factory import TinyFactory
 
         self.obj_under_transaction = obj_under_transaction
         self.simulation = simulation
@@ -484,7 +484,7 @@ def _encode_function_output(self, output) -> dict:
         # local import to avoid circular dependencies
         from tinytroupe.agent import TinyPerson
         from tinytroupe.environment import TinyWorld
-        from tinytroupe.personfactory import TinyFactory
+        from tinytroupe.factory import TinyFactory
 
         # if the output is a TinyPerson, encode it
@@ -512,7 +512,7 @@ def _decode_function_output(self, encoded_output: dict):
         # local import to avoid circular dependencies
         from tinytroupe.agent import TinyPerson
         from tinytroupe.environment import TinyWorld
-        from tinytroupe.personfactory import TinyFactory
+        from tinytroupe.factory import TinyFactory
 
         if encoded_output is None:
             return None
diff --git a/tinytroupe/enrichment.py b/tinytroupe/enrichment.py
index f0aa94f..e7f579e 100644
--- a/tinytroupe/enrichment.py
+++ b/tinytroupe/enrichment.py
@@ -7,14 +7,14 @@
 
 from tinytroupe.agent import TinyPerson
 from tinytroupe.environment import TinyWorld
-from tinytroupe.personfactory import TinyPersonFactory
+from tinytroupe.factory import TinyPersonFactory
 from tinytroupe.utils import JsonSerializableRegistry
 
 from tinytroupe import openai_utils
 import tinytroupe.utils as utils
 
-class Enricher(JsonSerializableRegistry):
+class TinyEnricher(JsonSerializableRegistry):
 
     def __init__(self, use_past_results_in_context=False) -> None:
         self.use_past_results_in_context = use_past_results_in_context
diff --git a/tinytroupe/extraction.py b/tinytroupe/extraction.py
index 7135ad3..eba8e51 100644
--- a/tinytroupe/extraction.py
+++ b/tinytroupe/extraction.py
@@ -22,14 +22,14 @@
 
 from tinytroupe.agent import TinyPerson
 from tinytroupe.environment import TinyWorld
-from tinytroupe.personfactory import TinyPersonFactory
+from tinytroupe.factory import TinyPersonFactory
 from tinytroupe.utils import JsonSerializableRegistry
 
 from tinytroupe import openai_utils
 import tinytroupe.utils as utils
 
-class InteractionResultsExtractor:
+class ResultsExtractor:
 
     def __init__(self):
         self._extraction_prompt_template_path = os.path.join(os.path.dirname(__file__), 'prompts/interaction_results_extractor.mustache')
@@ -201,7 +201,7 @@ def save_as_json(self, filename:str, verbose:bool=False):
 
 
-class InteractionResultsReducer:
+class ResultsReducer:
 
     def __init__(self):
         self.results = {}
@@ -510,4 +510,4 @@ def normalize(self, element_or_elements:Union[str, List[str]]) -> Union[str, Lis
 ################################################################################
 
 # default extractor
-default_extractor = InteractionResultsExtractor()
\ No newline at end of file
+default_extractor = ResultsExtractor()
\ No newline at end of file
diff --git a/tinytroupe/personfactory.py b/tinytroupe/factory.py
similarity index 95%
rename from tinytroupe/personfactory.py
rename to tinytroupe/factory.py
index 6e39647..6b7a12e 100644
--- a/tinytroupe/personfactory.py
+++ b/tinytroupe/factory.py
@@ -119,17 +119,17 @@ def generate_person_factories(number_of_factories, generic_context_text):
 
         logger.info(f"Starting the generation of the {number_of_factories} person factories based on that context: {generic_context_text}")
 
-        person_factories_prompt = open(os.path.join(os.path.dirname(__file__), 'prompts/generate_person_factory.md')).read()
+        system_prompt = open(os.path.join(os.path.dirname(__file__), 'prompts/generate_person_factory.md')).read()
 
         messages = []
-        messages.append({"role": "system", "content": person_factories_prompt})
+        messages.append({"role": "system", "content": system_prompt})
 
-        prompt = chevron.render("Please, create {{number_of_factories}} person descriptions based on the following broad context: {{context}}", {
+        user_prompt = chevron.render("Please, create {{number_of_factories}} person descriptions based on the following broad context: {{context}}", {
             "number_of_factories": number_of_factories,
             "context": generic_context_text
         })
-        messages.append({"role": "user", "content": prompt})
+        messages.append({"role": "user", "content": user_prompt})
 
         response = openai_utils.client().send_message(messages)
diff --git a/tinytroupe/tools.py b/tinytroupe/tools.py
index 3a24898..0eebb0d 100644
--- a/tinytroupe/tools.py
+++ b/tinytroupe/tools.py
@@ -10,7 +10,7 @@
 import tinytroupe.utils as utils
 from tinytroupe.extraction import ArtifactExporter
-from tinytroupe.enrichment import Enricher
+from tinytroupe.enrichment import TinyEnricher
 from tinytroupe.utils import JsonSerializableRegistry
diff --git a/tinytroupe/utils.py b/tinytroupe/utils.py
index 4e6651d..8eb41bc 100644
--- a/tinytroupe/utils.py
+++ b/tinytroupe/utils.py
@@ -255,28 +255,41 @@ def read_config_file(use_cache=True, verbose=True) -> configparser.ConfigParser:
     else:
         config = configparser.ConfigParser()
 
-        # first, try the directory of the current main program
-        config_file_path = Path.cwd() / "config.ini"
+        # Read the default values in the module directory.
+        config_file_path = Path(__file__).parent.absolute() / 'config.ini'
+        print(f"Looking for default config on: {config_file_path}") if verbose else None
        if config_file_path.exists():
-            config.read(config_file_path)
             _config = config
-            return config
        else:
-            if verbose:
-                print(f"Failed to find custom config on: {config_file_path}")
-                print("Now switching to default config file...")
+            raise ValueError(f"Failed to find default config on: {config_file_path}")
 
-        # if nothing there, use the default one in the module directory
-        config_file_path = Path(__file__).parent.absolute() / 'config.ini'
-        print(f"Looking for config on: {config_file_path}") if verbose else None
+        # Now, let's override any specific default value, if there's a custom .ini config.
+        # Try the directory of the current main program
+        config_file_path = Path.cwd() / "config.ini"
         if config_file_path.exists():
-            config.read(config_file_path)
+            print(f"Found custom config on: {config_file_path}") if verbose else None
+            config.read(config_file_path) # this only overrides the values that are present in the custom config
             _config = config
             return config
         else:
-            raise ValueError("Could not find config.ini file anywhere")
-
+            if verbose:
+                print(f"Failed to find custom config on: {config_file_path}") if verbose else None
+                print("Will use only default values. IF THINGS FAIL, TRY CUSTOMIZING MODEL, API TYPE, etc.") if verbose else None
+
+        return config
+
+def pretty_print_config(config):
+    print()
+    print("=================================")
+    print("Current TinyTroupe configuration ")
+    print("=================================")
+    for section in config.sections():
+        print(f"[{section}]")
+        for key, value in config.items(section):
+            print(f"{key} = {value}")
+        print()
+
 def start_logger(config: configparser.ConfigParser):
 
     # create logger
     logger = logging.getLogger("tinytroupe")
diff --git a/tinytroupe/personchecker.py b/tinytroupe/validation.py
similarity index 99%
rename from tinytroupe/personchecker.py
rename to tinytroupe/validation.py
index fddab03..58e9525 100644
--- a/tinytroupe/personchecker.py
+++ b/tinytroupe/validation.py
@@ -12,7 +12,7 @@
 
 default_max_content_display_length = config["OpenAI"].getint("MAX_CONTENT_DISPLAY_LENGTH", 1024)
 
-class TinyPersonChecker:
+class TinyPersonValidator:
 
     @staticmethod
     def validate_person(person, expectations=None, include_agent_spec=True, max_content_length=default_max_content_display_length):
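Editor's note on the `tinytroupe/utils.py` change above: the new `read_config_file` logic leans on `configparser`'s merge semantics, where a later `read()` call overrides only the keys the later file actually defines, leaving all other defaults untouched. Below is a minimal, standalone sketch of that layering; the file names and section contents are illustrative only, not TinyTroupe's actual paths or full config:

```python
import configparser
import os
import tempfile

# A "default" config (mimicking the config.ini shipped inside the module)
# and a partial "custom" config (mimicking a config.ini in the current
# working directory) that overrides only one key.
default_ini = "[OpenAI]\nMODEL=gpt-4o\nMAX_TOKENS=4000\nTEMPERATURE=0.3\n"
custom_ini = "[OpenAI]\nTEMPERATURE=0.7\n"  # overrides TEMPERATURE only

with tempfile.TemporaryDirectory() as tmp:
    default_path = os.path.join(tmp, "default_config.ini")
    custom_path = os.path.join(tmp, "custom_config.ini")
    with open(default_path, "w") as f:
        f.write(default_ini)
    with open(custom_path, "w") as f:
        f.write(custom_ini)

    config = configparser.ConfigParser()
    config.read(default_path)  # defaults are loaded first
    config.read(custom_path)   # later read overrides only keys present in it

    print(config["OpenAI"]["MODEL"])        # -> gpt-4o (kept from defaults)
    print(config["OpenAI"]["TEMPERATURE"])  # -> 0.7 (overridden by custom)
```

This is why the diff raises an error only when the *default* config is missing, while a missing *custom* config merely means "use defaults as-is".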