MAIN INFO
Ambrosinus-Toolkit latest version is: 1.3.1
-
CHANGELOG
v1.3.1
-1 I have updated ATK to version 1.3.1 to fix an exception that was causing an execution loop.
v1.3.0
-1 I have updated ATk with the latest Stable Diffusion annotators, preprocessor models and samplers;
-2 Integrated the ControlNET control mode, so priority can be given to the prompt, to ControlNET, or balanced between the two;
-3 Improved the metadata info text embedded into generated images; it now shows the WebUI and ControlNET (A1111) versions;
-4 Added the "ATkImagePreview" component into the Image subcategory. This component displays images on the Grasshopper canvas, and the preview can be scaled by a factor;
v1.2.9
-1 I have fixed some minor bugs encountered when running the "canny" mode through the "AItxtToimg" component;
-2 Added an auto-populate feature to all value lists used in the Grasshopper definition by the AI Stable Diffusion components. ControlNET lists (samplers, resize modes, checkpoints and annotators) are now updated automatically simply by connecting a default Grasshopper Value List component;
-3 Added the "ValueListPopulator" component into the Manage subcategory. Through this component you can build a custom value list by passing your own keys and items, optionally wrapping the items in quotes (see the sketch after this list);
-4 Added a fun "ATk Support BuyMeACoffee" component into the About subcategory, just to share a moment of virtual friendship (if you wish);
-5 As always, stay tuned!
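
For reference, a minimal C# sketch of how a value list could be filled from keys/items, assuming the Grasshopper SDK types GH_ValueList and GH_ValueListItem; the PopulateValueList name and the quote option are illustrative, not the component's actual code.

    using System.Collections.Generic;
    using Grasshopper.Kernel.Special;

    // Hypothetical helper: fill a Grasshopper Value List from parallel keys/items.
    GH_ValueList PopulateValueList(IList<string> keys, IList<string> items, bool addQuotes)
    {
        var vl = new GH_ValueList();
        vl.ListItems.Clear();                       // remove the default A/B/C/D entries
        for (int i = 0; i < keys.Count; i++)
        {
            // The item expression is evaluated as code, so plain strings usually need quotes
            string expr = addQuotes ? "\"" + items[i] + "\"" : items[i];
            vl.ListItems.Add(new GH_ValueListItem(keys[i], expr));
        }
        return vl;
    }
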
v1.2.8
-1 Added the new "AIoutpaint" component. Now you can explore everything outside of the image frame;
v1.2.7
-1 This version (v1.2.7) has been updated to the Grasshopper core 7.35.23346.11001, so users can run the toolkit inside both Rhino v7 and Rhino v8 (all C# components);
-2 Minor bugs fixed;
-3 Added "SetCanvas" component (light version). You can customize partially your canvas and components;
v1.2.6
-1 Added a local AI variation mode into the AI subcategory. Running Stable Diffusion with this feature, you can alter the images through the "Denoise" parameter and the use of an editing mask;
-2 Added Text-to-3D, Image-to-3D and Text-to-Texture from the Meshy.ai service into the AI subcategory. The "MeshyReq" component generates the request, while "MeshyRet" retrieves all the files processed through the API key (WIP: in the next few days I will upload the "ghuser" objects to the GitHub page);
-3 Added the "B64toBMP" component into the Image subcategory. Through it, it is possible to convert an image to a Base64 string (text) and convert that string back into a Bitmap (see the sketch after this list). It is useful for embedding an image into a gh file without attaching the bitmap file;
-4 Added the "Alkemyc Kaballah" component into the Extra subcategory. It is a fun experiment in numerology according to the Kabbalah tradition - "What's in the name?";
v1.2.5
-1 I have fixed several bugs;
-2 Updated "GradientGen" and "GradientDecod" to generate multiple Gradient components and decode multiple lists of RGB values;
-3 Added "SaveGradientLib" component into the Manage subcategory. Through it, you can save different Grasshopper gradients in a custom library format (grasshopper_gradients.xml file);
-4 Added "SwitchGradientLib" component into the Manage subcategory. Now it is easy to switch among different gradient libraries without rebooting Grasshopper;
-5 Added "WireX" component into the Manage subcategory. Say goodbye to tedious connect/disconnect action with multiple components. Now you can control individual input/output parameters when connecting and disconnecting components;
v1.2.4
-1 I have fixed several bugs that came up after some A1111 API updates. The issue generally appeared with the error message "Index was outside the bounds of the array" and was related to the info text embedded in the generated image;
-2 Added an internal timeout to the "SDopts_loc" component (the one used to select different checkpoint models);
-3 Fixed an issue related to generating more than one image (the N parameter);
-4 Fixed several issues with the INFOtext generation when the user switches among the T2I Base, CN v1.0 and CN v1.1.X processes;
-5 Fixed the assignment of decimal CFG values (e.g. 5.5, 5.8, etc.);
-6 Reduced the number of times the CN v1.1.X alert message is shown;
v1.2.3
-1 I have fixed a bug that came up after some A1111 API updates. The issue generally appeared with the error message "Index was outside the bounds of the array" and was related to the info text embedded in the generated image.
v1.2.2
-1 "LaunchSD_loc" has been updated: now the input parameters give the progression sense of the sequence of the steps. Now you can check the IP address through the integrated ipconfig command helpful when the user ticks the listen argument;
-2 "CustIPport_loc" component has been added: this component helps the user in setting the Remote PC configuration passing the IP address shared on his network and the port number opened on the Windows OS Firewall;
-3 "AIeNG_loc" component: its new look added the implementation of the ControlNET version 1.1.X (bear in mind it is implemented, but not yet fully API supported so I suggest using v1.0) so you can test it but it is already known that the dataset-models do not work properly in the API mode;
-4 Thanks to WinSDlauncher (the Windows OS app) is it possible to set the Server PC without switching/fetching the Rhino license between the Server PC and the Remote one (please see the video demo);
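
As a rough idea of the integrated IP check mentioned above, the sketch below runs ipconfig and extracts the first IPv4 address; the GetLocalIPv4 helper is purely illustrative (the component may well use a different approach, e.g. the .NET Dns/NetworkInterface APIs).

    using System.Diagnostics;
    using System.Text.RegularExpressions;

    // Run ipconfig and pull the first IPv4 address from its output (Windows only)
    static string GetLocalIPv4()
    {
        var psi = new ProcessStartInfo("ipconfig")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var p = Process.Start(psi))
        {
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            // NOTE: matches the English "IPv4 Address" label; localized Windows output differs
            var m = Regex.Match(output, @"IPv4.*?:\s*(\d+\.\d+\.\d+\.\d+)");
            return m.Success ? m.Groups[1].Value : string.Empty;
        }
    }
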
v1.2.1
-1 "LaunchSD_loc" has been improved, now it is possible to interact with webui-user.bat file and some of its most used arguments. It can create webui-user.bat file according to cmd arguments;
-2 "ViewCapture" component has been added to the 2.Image sub-category. This component allows the user to save the Rhino viewport as an image file (JPG/PNG) with a scale factor and as a Rhino Named Views object. Now you can pass easily your 3D model screenshot as BaseIMH input in the AIeng_loc component to render your concept design;
-3 "UpsclAI_loc" component has been added to the 3.AI sub-category. This upscaler can improve the output size of any image format leveraging AI models working with the A1111 project;
-4 "OpenDir" component has been added to the 4. Manage sub-category. It is still in WIP mode but basically, it can open quickly any full folder path passed as input;
-5 "AIeng_loc" component has been updated. Currently, it works fine with ControlNET v1.0 extension (see the video tutorial on how to install it) but it is ready for ControlNET v1.1, more info in the next update;
-6 "Extra components" have been re-ordered in the 9.Extra sub-category. Please, delete old ones (to avoid an overlapped effect) and download new versions from the GitHub page;
v1.2.0
-1 "SDopts_loc" has been added to the AI sub-category. This component allows the user to set a custom Stable Diffusion model checkpoint, for instance, using the “mdjrny-v4.safetensors” model, a dataset trained with Midjourney version 4 images;
-2 "SD-Imginfo" allows the user to read all AI-Gen settings used for image generation;
-3 All C# components have been updated with a new right-click context menu to get more info;
-4 The layout of the AI subcategory components has been reordered;
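
For reference, switching the active checkpoint through the A1111 API boils down to a single POST to the options endpoint; the sketch below is an assumption about how SDopts_loc might do it, not its actual code.

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    // Ask the running A1111 instance to load a different model checkpoint
    static async Task SetCheckpoint(string baseUrl, string checkpointName)
    {
        string json = "{\"sd_model_checkpoint\": \"" + checkpointName + "\"}";
        using (var client = new HttpClient())
        {
            var content = new StringContent(json, Encoding.UTF8, "application/json");
            var resp = await client.PostAsync(baseUrl + "/sdapi/v1/options", content);
            resp.EnsureSuccessStatusCode();   // A1111 reloads the model, which can take a while
        }
    }

    // Example: await SetCheckpoint("http://127.0.0.1:7860", "mdjrny-v4.safetensors");
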
v1.1.9
-1 "AIeNG_loc" has been added to the AI sub-category. This component lays the foundations for future adaptations, expansions and tools to make the most of the best features provided by the AUTOMATIC1111 project - It is part of the "_loc" components able to run Stable Diffusion and ControlNET neural network locally;
-2 "LaunchSD_loc" has been added to the AI sub-category. This component can open and close the localhost port and also check the status - It is part of the "_loc" components able to run Stable Diffusion and ControlNET neural network locally (thanks to the Automatic1111 project);
-3 "LA_DPTto3D_b101" has been added to the AI sub-category (downloadable from the GitHub page). This component can generate a PointCloud (stored in the PLY file format) directly from a 2D RGB image. It exploits the DPT technology (transformers libraries). Right-click on the component for more info;
v1.1.8
-1 "FileNamer" component has been fixed; these updates (v1.1.7/v1.1.8) were necessary due to a couple of incoming research projects;
-2 "La_GrayGaussMask" has been updated. Some minor fixes;
-3 "LA_OpenAI-GH_Ask_b107" has been updated. Some minor fixes;
-4 "LA_OpenAI-GHadv_b112" has been updated. Some minor fixes;
-5 "LA_StabilityAI-GH_b108" has been updated. Some minor fixes;
v1.1.7
-1 "SeeOut" component has been fixed to avoid a loop-action if its output is passed as input in OpenAI or StabilityAI component. Now it is not necessary to rename the Slider Number with the name "SeqID";
v1.1.6
-1 Ambrosinus-Toolkit updated with the "SeeOut" component in the Image sub-category. It can slide the image file path Output by a "SeqID" slider or just see the latest file path generated from OpenAI or StabilityAI GH component;
-2 Minor fix for LA_StabilityAI-GH component, now at Build-107, see the GitHub page to download it. If you already downloaded this build on 2023/02/06, please do it again.
-3 Minor fix for LA_OpenAI-GH component, now at Build-111; see the GitHub page to download it;
-4 Minor fix for LA_OpenAI_Ask-GH component, now at Build-106; see the GitHub page to download it;
v1.1.5
-1 Ambrosinus-Toolkit updated with "FileNamer" component. It can generate a filename to avoid overwriting action;
-2 New version of the "SdINinfo" component. Now able to show different info from StabilityAI image output like BaseIMG and MaskIMG info/links;
-3 "LA_StabilityAI-GH" now is advanced, which means that is possible to run: TXT2IMG, IMG2IMG and IMG2IMG Masking features. I added CLIP guidance too in order to drive the image generation process;
-4 All images will be stored in IMGs folder. All metadata will be stored in TXTs folder. Each run will be tracked in a unique CSV file. Helpful for data storytelling;
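
A minimal sketch of the non-overwriting naming idea behind FileNamer; NextFreeName is a hypothetical helper, and the actual component may use a different suffix pattern.

    using System.IO;

    // Append an incrementing suffix until the file name is free, so nothing gets overwritten
    static string NextFreeName(string folder, string baseName, string ext)
    {
        string path = Path.Combine(folder, baseName + ext);
        int i = 1;
        while (File.Exists(path))
            path = Path.Combine(folder, string.Format("{0}_{1:000}{2}", baseName, i++, ext));
        return path;
    }
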
v1.1.4
-1 "LA_StabilityAI-GH" build-102 has been updated. After the StabilityAI updates of the "stability-sdk" Python library to v0.3.0, this component you can now select different Engine (Stable Diffusion v1, v1.5, v2.0 and v2.1) and among 10 different samplers; I have modified the filename layout for integrating "SD-INinfo" actions;
-2 "SD-INinfo" component has been developed and added. This component allows the user to grab settings info from the filename of the image generated by LA_StabilityAI-GH component;
-3 fixed minor bugs and inaccuracies in descriptions and names;
v1.1.3
-1 "AnsToPrompt" has been added. This component converts the AskToOpenAI answer into a text prompt;
-2 "AskToOpenAI" component has been developed and added (see GitHub page to install it) to play with OpenAI completion mode, the OpenAI GPT-3 model transforms this component into a super smart "chatBot". I asked for design tips but also for generating a simple piece of code (Python, C# etc.);
v1.1.2
-1 The subcategory layout has changed;
-2 "DALLEfromGH" UI updated with "Light version" info;
-3 "LA_OpenAI_GHadv" has replaced the CPython "Light version". Now you can experiment with the EDIT and VARIATION modes. You can install it via the GitHub repo;
-4 The Image subcategory has been added;
-5 "ImageConv" can read image file info and can convert images to these formats: Bmp, Emf, Exif, Gif, Icon, Jpeg, MemoryBmp, Png, Tiff, Wmf (see the sketch after this list);
-6 "ImageMask" can generate PNG and JPG image masks by simply drawing them inside Rhino over the BaseIMG;
v1.1.1
-1 "DALLEfromGH" has been added to the toolkit. In order to run it the Toolkit uses the Json DLL library provided in the installation ZIP file. Now you can explore OpenAI (DALL-E) prompt-to-image requesting process directly inside GH. No Python libraries need to be installed to run OpenAI. It has replaced the LA_OpenAI-GH.ghuser component (anyway it will always be available on GitHub and you can always install it by following the guide.
v1.1.0
-1 "GradientGen" has been replaced by Ambrosinus-Toolkit project;
-2 "ToolkitVersion" component - Now the toolkit can show the version and the main changelog (main updates) and above all notify user if he needs to install the latest version;
-3 "HEXtoRGB" now can be covert also from RGB to HEX values with lowercase and hashtag options;
-4 "KelvinToRGB" converts a Kelvin temperature to RGB value (for specific values shows some extra info about devices that emit the same temperature);
-5 "WavelengthToRGB" converts a Wavelength in a visible light range (380nm-780nm) to RGB value;
v1.0.0
-1 "GradientGen and Utilities", an internal module part of Ambrosinus-Toolkit project;