
best practice (+ maybe a feature request) #118

Open
Adminius opened this issue Feb 27, 2024 · 5 comments

Comments

@Adminius

Hi, what is the best way to implement a logger?

For example while driving these fields are important:

  • Location
  • Heading
  • Battery level (SoC)
  • Rated range
  • Autopilot state
  • Gear
  • Speed
  • maybe also TPMS
  • Temperature inside/outside

But experience with the Fleet API shows that even 4 fields at a 10-second interval already exceed the API rate limits.
So what is the best practice for doing this?
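
One workaround (a minimal sketch, not an official recommendation): spread the fields across different interval_seconds values so that slowly changing signals consume as little of the per-minute field budget as possible. The field names below are assumptions based on the protobuf definitions in this repo and should be verified, and the intervals are only illustrative since the exact rate-limit formula isn't documented here:

package main

import (
	"encoding/json"
	"fmt"
)

// interval mirrors the per-field "interval_seconds" entry used in the
// "fields" section of a fleet-telemetry config. The field names used
// below are assumptions; check them against the current proto files.
type interval struct {
	IntervalSeconds int `json:"interval_seconds"`
}

func main() {
	fields := map[string]interval{
		// Fast-changing signals: short interval.
		"Location":     {IntervalSeconds: 10},
		"VehicleSpeed": {IntervalSeconds: 10},
		// Slow-changing signals: long interval to save field budget.
		"Gear":            {IntervalSeconds: 60},
		"Soc":             {IntervalSeconds: 300},
		"EstBatteryRange": {IntervalSeconds: 300},
		"InsideTemp":      {IntervalSeconds: 600},
		"OutsideTemp":     {IntervalSeconds: 600},
	}

	out, _ := json.MarshalIndent(map[string]any{"fields": fields}, "", "  ")
	fmt.Println(string(out))
}

The idea is simply that Gear or SoC sampled every few minutes costs far less of the budget than at 10 seconds, while location and speed keep their resolution.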

Feature Request:
The client (= logger) asks the server for a set of fields (for example, the list above).
The server responds with only the fields that have changed since the client's last API call.
Gear, Autopilot state, (speed), temperatures, battery level and some other fields don't change most of the time, so usually only location and heading would be returned. That reduces traffic while the data quality stays the same.
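
Until something like that exists server-side, the same effect can be approximated inside the logger: remember the last value seen per field and only persist a row when the value actually changes. A minimal Go sketch (field names and the stored types are purely illustrative):

package main

import "fmt"

// DeltaLogger stores only values that differ from the previously seen one,
// which is roughly what the feature request asks the server to do.
type DeltaLogger struct {
	last map[string]any
}

func NewDeltaLogger() *DeltaLogger {
	return &DeltaLogger{last: make(map[string]any)}
}

// Record returns true (and remembers the value) only if the field changed.
func (d *DeltaLogger) Record(field string, value any) bool {
	if prev, ok := d.last[field]; ok && prev == value {
		return false // unchanged: skip the write
	}
	d.last[field] = value
	return true
}

func main() {
	log := NewDeltaLogger()
	samples := []map[string]any{
		{"Gear": "D", "Soc": 71, "Speed": 88},
		{"Gear": "D", "Soc": 71, "Speed": 92}, // only Speed changed
	}
	for _, s := range samples {
		for field, v := range s {
			if log.Record(field, v) {
				fmt.Printf("store %s=%v\n", field, v)
			}
		}
	}
}

The map can also be flushed on a timer so that a long stretch of identical values still produces an occasional heartbeat row.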

@bassmaster187

10 seconds of Spotify streaming generates more traffic than such a logger configuration at a 10-second interval does in hours, so I can't understand why it is limited.

@rileymd88

I am also curious how to handle these cases.

Most loggers today poll the https://developer.tesla.com/docs/fleet-api#vehicle_data API every 10-30 seconds to get all the data they need; however, it is currently not possible to get a similar level of detail with fleet telemetry due to the limits in place.
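
For reference, a polling loop of that kind reduced to its core, with a simple backoff when the API returns HTTP 429. The base URL and endpoint path below follow the linked docs but should be double-checked, and token acquisition is left out entirely:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// pollVehicleData fetches the vehicle_data endpoint in a loop and backs off
// when the Fleet API signals rate limiting (HTTP 429). The URL and the
// 30-second base interval are assumptions; adjust to the current docs.
func pollVehicleData(baseURL, vehicleID, token string) {
	client := &http.Client{Timeout: 15 * time.Second}
	interval := 30 * time.Second

	for {
		req, _ := http.NewRequest("GET",
			fmt.Sprintf("%s/api/1/vehicles/%s/vehicle_data", baseURL, vehicleID), nil)
		req.Header.Set("Authorization", "Bearer "+token)

		resp, err := client.Do(req)
		if err != nil {
			fmt.Fprintln(os.Stderr, "request failed:", err)
			time.Sleep(interval)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()

		if resp.StatusCode == http.StatusTooManyRequests {
			interval *= 2 // back off instead of hammering the API
			fmt.Fprintln(os.Stderr, "rate limited, new interval:", interval)
		} else {
			fmt.Println(string(body)) // hand off to the logger here
		}
		time.Sleep(interval)
	}
}

func main() {
	pollVehicleData("https://fleet-api.prd.na.vn.cloud.tesla.com",
		os.Getenv("VEHICLE_ID"), os.Getenv("TESLA_TOKEN"))
}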

@KarlF-KFLOE

My understanding is that 10 fields per minute is a temporary constraint and that subscription tiers offering more fields per minute will be available by the end of Q2 (fast approaching!). I'm hoping this is still the timeline (and that the cost is reasonable), as this is critical for our own use case!

@kiler129

kiler129 commented May 14, 2024

Is the limit still in place in reality? The current Tesla Fleet API docs state the following:

[screenshot: current Fleet Telemetry section of the Fleet API docs]

The same section had the limit listed in 12/2023:
[screenshot: the same section as of 12/2023, with the limit listed]

The section was tweaked in 01/2024:
[screenshot: the same section as tweaked in 01/2024]

Throughout January, February, and into March the section still listed the limit; in March the "Fleet Telemetry" section was revised to the abbreviated version that is online now.
I think @patrickdemers6 may be able to answer whether the limit is still in place, and maybe even why, but we probably need to wait for an official Fleet API docs update for information about planned limits (as that's more of a business decision).

Edit:
@Adminius FYI, your feature request is very unlikely to happen. The current architecture is designed for high-throughput, stateless delivery of data; what you're suggesting is a poll-based approach with a persistence layer, which doesn't scale.

@bassmaster187

bassmaster187 commented May 14, 2024

Limits are still active:

{
	"response": null,
	"error": "Total data transfer rate is too high. Please reduce field count or increase interval_seconds for some fields.",
	"error_description": "",
	"txid": "xxxxx"
}
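
A logger can detect that rejection programmatically and fall back to fewer fields or larger interval_seconds values. A small sketch that only mirrors the JSON shown above (nothing in it comes from an official SDK):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// configError mirrors the error payload shown above.
type configError struct {
	Response         any    `json:"response"`
	Error            string `json:"error"`
	ErrorDescription string `json:"error_description"`
	TxID             string `json:"txid"`
}

// isRateTooHigh reports whether the API rejected the config for exceeding
// the data transfer limit, so the caller can retry with fewer fields or
// larger interval_seconds values.
func isRateTooHigh(body []byte) bool {
	var e configError
	if err := json.Unmarshal(body, &e); err != nil {
		return false
	}
	return strings.Contains(e.Error, "data transfer rate is too high")
}

func main() {
	body := []byte(`{"response":null,"error":"Total data transfer rate is too high. Please reduce field count or increase interval_seconds for some fields.","error_description":"","txid":"xxxxx"}`)
	fmt.Println("rate limited:", isRateTooHigh(body))
}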
