Google’s crawlers and fetchers don’t process unlimited content. By default, they crawl only the first 15MB of a file; any content beyond that limit is ignored. Individual crawlers (such as Googlebot) may enforce smaller limits for specific file types: an HTML page may effectively be limited to around 2MB, while a PDF may be subject to a larger limit.
If critical content falls beyond the crawl limit, it may not be indexed and therefore may not appear in search results.
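As a rough way to check this, a short script can fetch a page’s raw bytes and report how far into the file key elements appear relative to the limits described above. This is a minimal sketch, not an official Google tool: the 15MB default and the approximate 2MB HTML figure come from this section, and the URL and marker list are placeholders to adapt for your own pages.

```python
# Sketch: report the byte offset of key HTML markers relative to the
# crawl limits described above. Limits taken from this article; the URL
# and marker list below are illustrative placeholders.
import urllib.request

CRAWL_LIMIT_BYTES = 15 * 1024 * 1024  # default crawl limit (15MB)
HTML_LIMIT_BYTES = 2 * 1024 * 1024    # approximate effective HTML limit (~2MB)

def check_offsets(url, markers=(b"<title", b'name="description"', b"</body>")):
    """Fetch a page and flag markers that fall beyond the crawl limits."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    print(f"Fetched {len(body):,} bytes from {url}")
    for marker in markers:
        offset = body.find(marker)
        if offset == -1:
            print(f"  {marker!r}: not found")
        elif offset > CRAWL_LIMIT_BYTES:
            print(f"  {marker!r}: at byte {offset:,} -- beyond the 15MB limit")
        elif offset > HTML_LIMIT_BYTES:
            print(f"  {marker!r}: at byte {offset:,} -- beyond the ~2MB HTML limit")
        else:
            print(f"  {marker!r}: at byte {offset:,} -- within limits")

if __name__ == "__main__":
    check_offsets("https://example.com/")  # placeholder URL
```

In practice, keeping essential content (titles, descriptions, structured data) early in the document avoids the issue entirely, regardless of the exact limit a given crawler applies.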
For more information, see Overview of Google crawlers and fetchers (user agents).