Menu Synchronization

Menu data in Tote changes frequently -- store managers add items, update prices, mark items out of stock, and adjust availability windows throughout the day. Your integration needs a strategy to keep its local menu data current without hammering the API.

This guide covers three approaches to menu synchronization, from simplest to most efficient, along with caching and multi-location strategies.

Before diving into sync strategies, here is what the menu endpoint returns:

curl https://sandbox.api.tote.ai/v1/online-ordering/locations/{location_id}/menu \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

Key fields for synchronization:

| Field | Description |
| --- | --- |
| `last_modified` | ISO 8601 timestamp of the most recent menu change. |
| `version_hash` | Opaque hash representing the current menu version. Changes when any menu data changes. |
| `categories` | Full menu tree: categories, items, modifier groups, and modifiers. |

The version_hash is the foundation of every sync strategy below.

Approach 1: Full Menu Pull (Simplest)

Pull the entire menu on a fixed interval. This is the easiest approach to implement and appropriate for integrations with a small number of locations.

How it works:

  1. Call GET /locations/{location_id}/menu on a fixed interval.
  2. Replace your cached menu with the response.
# Pull the full menu every 15 minutes
curl https://sandbox.api.tote.ai/v1/online-ordering/locations/a1b2c3d4-e5f6-7890-abcd-ef1234567890/menu \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

Recommended interval: Every 15 minutes.
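The pull loop above can be sketched in a few lines of Python. This is a minimal sketch: `fetch_full_menu`, the injectable `opener`, and the `store` dict are illustrative helpers, not part of any Tote SDK.

```python
import json
import time
import urllib.request

API_BASE = "https://sandbox.api.tote.ai/v1/online-ordering"
PULL_INTERVAL = 900  # 15 minutes, per the recommended interval


def fetch_full_menu(location_id, token, opener=urllib.request.urlopen):
    """Fetch the complete menu for one location.

    `opener` is injectable so the function can be exercised without
    a live network connection.
    """
    req = urllib.request.Request(
        f"{API_BASE}/locations/{location_id}/menu",
        headers={"Authorization": f"Bearer {token}"},
    )
    with opener(req) as resp:
        return json.load(resp)


def run_full_pull_loop(location_id, token, store):
    """Approach 1: replace the cached menu on every tick, unconditionally."""
    while True:
        store[location_id] = fetch_full_menu(location_id, token)
        time.sleep(PULL_INTERVAL)
```

Because the full payload is replaced wholesale, there is no merge logic to get wrong; that simplicity is the main reason to start here.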

Pros:

  • Dead simple to implement.
  • No additional endpoints or logic required.
  • Guaranteed fresh data on every pull.

Cons:

  • Downloads the full menu payload (~100KB+) every time, even when nothing changed.
  • Wasteful for locations with infrequent menu changes.
  • Does not scale well beyond ~20 locations (rate limits become a concern).

When to use: Prototyping, small integrations (fewer than 20 locations), or when simplicity is more important than efficiency.

Approach 2: Metadata Polling (Recommended)

Poll the lightweight metadata endpoint to detect changes, then download the full menu only when the version_hash has changed. This is the recommended approach for most integrations.

How it works:

  1. Fetch and cache the full menu (Approach 1, once).
  2. Periodically call GET /locations/{location_id}/menu/metadata to get the current version_hash.
  3. Compare the returned version_hash with your cached version.
  4. If they differ, re-fetch the full menu.
# Step 1: Initial full menu pull (cache the version_hash)
curl https://sandbox.api.tote.ai/v1/online-ordering/locations/a1b2c3d4-e5f6-7890-abcd-ef1234567890/menu \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"
# Response includes: "version_hash": "sha256:a1b2c3d4e5f6"

# Step 2: Poll metadata (lightweight, ~200 bytes)
curl https://sandbox.api.tote.ai/v1/online-ordering/locations/a1b2c3d4-e5f6-7890-abcd-ef1234567890/menu/metadata \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

Metadata response:

{
  "location_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "last_modified": "2026-01-15T14:30:00Z",
  "version_hash": "sha256:a1b2c3d4e5f6"
}

Pseudocode:

import time

POLL_INTERVAL = 300  # 5 minutes during business hours

class MenuSync:
    def __init__(self, api_client, location_id):
        self.api = api_client
        self.location_id = location_id
        self.cached_menu = None
        self.cached_hash = None

    def initial_load(self):
        """Fetch the full menu and cache it."""
        menu = self.api.get_menu(self.location_id)
        self.cached_menu = menu
        self.cached_hash = menu["version_hash"]

    def poll_and_sync(self):
        """Check for changes and re-fetch if needed."""
        metadata = self.api.get_menu_metadata(self.location_id)

        if metadata["version_hash"] != self.cached_hash:
            # Menu changed -- re-fetch the full menu
            menu = self.api.get_menu(self.location_id)
            self.cached_menu = menu
            self.cached_hash = menu["version_hash"]
            return True  # Menu was updated
        return False  # No change

    def run(self):
        """Main sync loop."""
        self.initial_load()
        while True:
            self.poll_and_sync()
            time.sleep(POLL_INTERVAL)

Recommended polling intervals:

| Period | Interval | Reason |
| --- | --- | --- |
| During business hours | Every 5 minutes | Menus change most frequently when the store is open. |
| Outside business hours | Every 30 minutes | Changes are rare; reduce API load. |

Pros:

  • Metadata responses are tiny (~200 bytes vs ~100KB+ for full menus).
  • Only downloads the full menu when something actually changed.
  • Scales to hundreds of locations.

Cons:

  • Slightly more complex than Approach 1.
  • Up to 5-minute delay between a menu change and your integration seeing it.

When to use: Production integrations with any number of locations. This is the recommended default approach.

Approach 3: Webhook-Driven (Lowest Latency)

Subscribe to menu.changed webhook events and re-fetch the menu only when notified. This approach provides near-real-time menu updates with minimal API polling.

Note: Webhook subscriptions are documented in Phase 4. This section provides a forward reference so you can plan your architecture.

How it works:

  1. Register a webhook subscription for the menu.changed event type.
  2. When a menu changes, Tote sends a webhook to your registered URL.
  3. On receiving the webhook, fetch the full menu for the affected location.
  4. Use metadata polling (Approach 2) as a fallback to catch any missed webhooks.

Expected webhook payload:

{
  "event_type": "menu.changed",
  "location_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "version_hash": "sha256:x9y8z7w6v5u4",
  "timestamp": "2026-01-15T14:30:00Z"
}

Architecture:

Tote API --webhook--> Your Server --> Re-fetch menu
                                  --> Update local cache

Fallback: metadata polling every 30 minutes (catches missed webhooks)
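A webhook handler for this flow can be sketched as below. The `WEBHOOK_SECRET` constant, the HMAC-SHA256 scheme, and the signature format are assumptions for illustration; the Phase 4 webhook documentation defines the actual verification scheme.

```python
import hashlib
import hmac
import json

# Assumed secret and signing scheme -- see the Phase 4 webhook docs
# for the real verification details.
WEBHOOK_SECRET = b"your-webhook-secret"


def verify_signature(body: bytes, signature: str) -> bool:
    """Constant-time check of an assumed HMAC-SHA256 hex signature."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


def handle_webhook(body: bytes, signature: str, cache: dict, fetch_menu) -> bool:
    """Process a menu.changed event; returns True if the cache was updated."""
    if not verify_signature(body, signature):
        return False  # reject tampered or misdirected deliveries
    event = json.loads(body)
    if event["event_type"] != "menu.changed":
        return False
    loc = event["location_id"]
    cached = cache.get(loc, {})
    if cached.get("version_hash") == event["version_hash"]:
        return False  # already current (e.g. a duplicate delivery)
    cache[loc] = fetch_menu(loc)  # re-fetch only the affected location
    return True
```

Comparing the event's `version_hash` against the cache before re-fetching makes the handler idempotent, which matters because webhook deliveries can be duplicated on retry.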

Pros:

  • Near-real-time updates (seconds, not minutes).
  • Minimal API calls -- only fetches menus when they actually change.

Cons:

  • More complex to implement (webhook endpoint, signature verification, retry handling).
  • Requires a publicly accessible HTTPS endpoint to receive webhooks.
  • Must implement Approach 2 as fallback for reliability.

When to use: Integrations that need sub-minute menu freshness (e.g., customer-facing ordering apps with high traffic).

Caching Strategy

Regardless of which sync approach you use, follow these caching principles:

Cache per location

Each location has its own menu. Cache menus keyed by location_id:

cache_key = f"menu:{location_id}"

Use version_hash for invalidation

Never use time-based cache expiry alone. Always compare version_hash to know whether your cached data is current:

def is_cache_fresh(location_id, cached_hash):
    metadata = api.get_menu_metadata(location_id)
    return metadata["version_hash"] == cached_hash

Set TTL matching your poll interval

If using metadata polling, set your cache TTL to match the poll interval as a safety net:

| Approach | Cache TTL |
| --- | --- |
| Full pull (Approach 1) | 15 minutes |
| Metadata polling (Approach 2) | 5 minutes (business hours), 30 minutes (off-hours) |
| Webhook-driven (Approach 3) | No time-based TTL; invalidate on webhook |

Handle cache misses gracefully

If your cache is empty (cold start, eviction, crash recovery), fetch the full menu before serving any customer requests. Never serve stale or empty menu data.
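The cold-start rule can be expressed as a small read-through helper. This is a sketch; `get_menu_or_fetch` and its `fetch_menu` callable are illustrative names, not API surface.

```python
def get_menu_or_fetch(location_id, cache, fetch_menu):
    """Serve from cache; on a miss, block on a full fetch rather than serve nothing."""
    menu = cache.get(location_id)
    if menu is None:
        menu = fetch_menu(location_id)  # cold start / eviction / crash recovery
        cache[location_id] = menu
    return menu
```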

Multi-Location Sync

When syncing menus for many locations, stagger your polling to avoid hitting rate limits.

Stagger polling intervals

Distribute poll times evenly across your poll interval:

import time

locations = ["loc-001", "loc-002", "loc-003", ..., "loc-100"]
poll_interval = 300  # 5 minutes = 300 seconds
delay_per_location = poll_interval / len(locations)  # 3 seconds

for location_id in locations:
    sync_menu_metadata(location_id)
    time.sleep(delay_per_location)

Example calculation:

  • 100 locations, 5-minute poll interval
  • Delay between locations: 300s / 100 = 3 seconds
  • All locations polled within one interval
  • Each location polled once every 5 minutes

Prioritize active locations

Poll locations with online_ordering_enabled: true more frequently than disabled locations. Disabled locations do not need menu sync until they are re-enabled.
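One way to express this prioritization is a per-location interval. The `online_ordering_enabled` field comes from the location object; the 30-minute interval for disabled locations is illustrative, chosen so re-enabled stores are still noticed reasonably quickly.

```python
POLL_ENABLED = 300    # 5 minutes, matching the business-hours recommendation
POLL_DISABLED = 1800  # illustrative: a slow check so re-enabled stores are picked up


def poll_interval_for(location: dict) -> int:
    """Pick a metadata poll interval based on the location's ordering status."""
    if location.get("online_ordering_enabled"):
        return POLL_ENABLED
    return POLL_DISABLED
```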

Use parallel requests carefully

If using parallel requests to speed up syncing, limit concurrency to avoid rate limiting:

import asyncio

MAX_CONCURRENT = 10  # Stay well under rate limits

async def sync_with_limit(semaphore, location_id):
    async with semaphore:
        await sync_menu_metadata(location_id)  # your async metadata check

async def sync_all(locations):
    semaphore = asyncio.Semaphore(MAX_CONCURRENT)
    tasks = [sync_with_limit(semaphore, loc) for loc in locations]
    await asyncio.gather(*tasks)

Choosing an Approach

| Factor | Approach 1 | Approach 2 | Approach 3 |
| --- | --- | --- | --- |
| Complexity | Low | Medium | High |
| Freshness | 15 min | 5 min | Seconds |
| Bandwidth | High | Low | Lowest |
| Locations | < 20 | Any | Any |
| Best for | Prototyping | Production (default) | Real-time apps |

Start with Approach 2 (metadata polling) for production integrations. Add webhook support (Approach 3) later if you need sub-minute freshness.

Next Steps