
Commit 64595aa (parent 5022aca)
Author: Fatih Aydın
refactor: Update README.md

1 file changed: +25 −25 lines

README.md (25 additions, 25 deletions)
@@ -81,7 +81,7 @@ To use the Gemini API, you'll need an API key. If you don't already have one, cr
 Interact with Gemini's API:
 
 ```php
-$result = $container->get('openai')->geminiPro()->generateContent('Hello');
+$result = $container->get('gemini')->geminiPro()->generateContent('Hello');
 
 $result->text(); // Hello! How can I assist you today?
 ```
@@ -92,7 +92,7 @@ $result->text(); // Hello! How can I assist you today?
 Generate a response from the model given an input message. If the input contains only text, use the `gemini-pro` model.
 
 ```php
-$result = $container->get('openai')->geminiPro()->generateContent('Hello');
+$result = $container->get('gemini')->geminiPro()->generateContent('Hello');
 
 $result->text(); // Hello! How can I assist you today?
 ```
@@ -102,7 +102,7 @@ If the input contains both text and image, use the `gemini-pro-vision` model.
 
 ```php
 
-$result = $container->get('openai')->geminiProVision()
+$result = $container->get('gemini')->geminiProVision()
     ->generateContent([
         'What is this picture?',
         new Blob(
@@ -119,7 +119,7 @@ $result->text(); // The picture shows a table with a white tablecloth. On the t
 Using Gemini, you can build freeform conversations across multiple turns.
 
 ```php
-$chat = $container->get('openai')->chat()
+$chat = $container->get('gemini')->chat()
     ->startChat(history: [
         Content::parse(part: 'The stories you write about what I have to say should be one line. Is that clear?'),
         Content::parse(part: 'Yes, I understand. The stories I write about your input should be one line long.', role: Role::MODEL)
@@ -138,7 +138,7 @@ echo $response->text(); // In the heart of England's lush countryside, amidst em
 By default, the model returns a response after completing the entire generation process. You can achieve faster interactions by not waiting for the entire result, and instead use streaming to handle partial results.
 
 ```php
-$stream = $container->get('openai')->geminiPro()
+$stream = $container->get('gemini')->geminiPro()
     ->streamGenerateContent('Write long a story about a magic backpack.');
 
 foreach ($stream as $response) {
@@ -150,7 +150,7 @@ foreach ($stream as $response) {
 When using long prompts, it might be useful to count tokens before sending any content to the model.
 
 ```php
-$response = $container->get('openai')->geminiPro()
+$response = $container->get('gemini')->geminiPro()
     ->countTokens('Write a story about a magic backpack.');
 
 echo $response->totalTokens; // 9
@@ -188,7 +188,7 @@ $generationConfig = new GenerationConfig(
     topK: 10
 );
 
-$generativeModel = $container->get('openai')->geminiPro()
+$generativeModel = $container->get('gemini')->geminiPro()
     ->withSafetySetting($safetySettingDangerousContent)
     ->withSafetySetting($safetySettingHateSpeech)
     ->withGenerationConfig($generationConfig)
@@ -201,7 +201,7 @@ Embedding is a technique used to represent information as a list of floating poi
 Use the `embedding-001` model with either `embedContents` or `batchEmbedContents`:
 
 ```php
-$response = $container->get('openai')->embeddingModel()
+$response = $container->get('gemini')->embeddingModel()
     ->embedContent("Write a story about a magic backpack.");
 
 print_r($response->embedding->values);
@@ -225,7 +225,7 @@ print_r($response->embedding->values);
 Use list models to see the available Gemini models:
 
 ```php
-$response = $container->get('openai')->models()->list();
+$response = $container->get('gemini')->models()->list();
 
 $response->models;
 //[
@@ -260,7 +260,7 @@ $response->models;
 Get information about a model, such as version, display name, input token limit, etc.
 ```php
 
-$response = $container->get('openai')->models()->retrieve(ModelType::GEMINI_PRO);
+$response = $container->get('gemini')->models()->retrieve(ModelType::GEMINI_PRO);
 
 $response->model;
 //Gemini\Data\Model Object
@@ -287,7 +287,7 @@ All responses are having a `fake()` method that allows you to easily create a re
 use Gemini\Testing\ClientFake;
 use Gemini\Responses\GenerativeModel\GenerateContentResponse;
 
-$container->get('openai')->fake([
+$container->get('gemini')->fake([
     GenerateContentResponse::fake([
         'candidates' => [
             [
@@ -303,7 +303,7 @@ $container->get('openai')->fake([
     ]),
 ]);
 
-$result = $container->get('openai')->geminiPro()->generateContent('test');
+$result = $container->get('gemini')->geminiPro()->generateContent('test');
 
 expect($result->text())->toBe('success');
 ```
@@ -314,11 +314,11 @@ In case of a streamed response you can optionally provide a resource holding the
 use Gemini\Testing\ClientFake;
 use Gemini\Responses\GenerativeModel\GenerateContentResponse;
 
-$container->get('openai')->fake([
+$container->get('gemini')->fake([
     GenerateContentResponse::fakeStream(),
 ]);
 
-$result = $container->get('openai')->geminiPro()->streamGenerateContent('Hello');
+$result = $container->get('gemini')->geminiPro()->streamGenerateContent('Hello');
 
 expect($response->getIterator()->current())
     ->text()->toBe('In the bustling city of Aethelwood, where the cobblestone streets whispered');
@@ -328,43 +328,43 @@ After the requests have been sent there are various methods to ensure that the e
 
 ```php
 // assert list models request was sent
-$container->get('openai')->models()->assertSent(callback: function ($method) {
+$container->get('gemini')->models()->assertSent(callback: function ($method) {
     return $method === 'list';
 });
 // or
-$container->get('openai')->assertSent(resource: Models::class, callback: function ($method) {
+$container->get('gemini')->assertSent(resource: Models::class, callback: function ($method) {
     return $method === 'list';
 });
 
-$container->get('openai')->geminiPro()->assertSent(function (string $method, array $parameters) {
+$container->get('gemini')->geminiPro()->assertSent(function (string $method, array $parameters) {
     return $method === 'generateContent' &&
         $parameters[0] === 'Hello';
 });
 // or
-$container->get('openai')->assertSent(resource: GenerativeModel::class, model: ModelType::GEMINI_PRO, callback: function (string $method, array $parameters) {
+$container->get('gemini')->assertSent(resource: GenerativeModel::class, model: ModelType::GEMINI_PRO, callback: function (string $method, array $parameters) {
     return $method === 'generateContent' &&
        $parameters[0] === 'Hello';
 });
 
 
 // assert 2 generative model requests were sent
-$container->get('openai')->assertSent(resource: GenerativeModel::class, model: ModelType::GEMINI_PRO, callback: 2);
+$container->get('gemini')->assertSent(resource: GenerativeModel::class, model: ModelType::GEMINI_PRO, callback: 2);
 // or
-$container->get('openai')->geminiPro()->assertSent(2);
+$container->get('gemini')->geminiPro()->assertSent(2);
 
 // assert no generative model requests were sent
-$container->get('openai')->assertNotSent(resource: GenerativeModel::class, model: ModelType::GEMINI_PRO);
+$container->get('gemini')->assertNotSent(resource: GenerativeModel::class, model: ModelType::GEMINI_PRO);
 // or
-$container->get('openai')->geminiPro()->assertNotSent();
+$container->get('gemini')->geminiPro()->assertNotSent();
 
 // assert no requests were sent
-$container->get('openai')->assertNothingSent();
+$container->get('gemini')->assertNothingSent();
 ```
 
 To write tests expecting the API request to fail you can provide a `Throwable` object as the response.
 
 ```php
-$container->get('openai')->fake([
+$container->get('gemini')->fake([
     new ErrorException([
         'message' => 'The model `gemini-basic` does not exist',
         'status' => 'INVALID_ARGUMENT',
@@ -373,5 +373,5 @@ $container->get('openai')->fake([
 ]);
 
 // the `ErrorException` will be thrown
-$container->get('openai')->geminiPro()->generateContent('test');
+$container->get('gemini')->geminiPro()->generateContent('test');
 ```
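Every hunk in this commit makes the same one-line change: the client is resolved from the container under the key `gemini` instead of `openai`. A minimal sketch of what the wiring side of that rename might look like, assuming a PSR-11 container with a `set()` method (e.g. PHP-DI) and the `Gemini::client()` factory from the Gemini PHP client; the `set()` call and environment variable name here are illustrative assumptions, not part of the commit:

```php
<?php

// Hypothetical wiring, not from the diff: register the client under the
// 'gemini' key that this commit's README examples now use.
// $container->set('gemini', fn () => Gemini::client(getenv('GEMINI_API_KEY')));

// Consumers then resolve it by that key, exactly as the updated README shows:
$result = $container->get('gemini')->geminiPro()->generateContent('Hello');
echo $result->text();
```

Renaming the key matters because PSR-11 containers look services up by opaque string identifiers; any consumer still calling `$container->get('openai')` after this change would receive a `NotFoundExceptionInterface` rather than the Gemini client.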

0 commit comments