
I assume that "pretty fast" depends on the phone. My old Pixel 4a ran Gemma-3n-E2B-it-int4 without problems. Still, it took over 10 minutes to finish answering "What can you see?" when given an image from my recent photos.

Final stats:

15.9 seconds to first token

16.4 tokens/second prefill speed

0.33 tokens/second decode speed

662 seconds to complete the answer
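As a rough sanity check on how those numbers fit together (my own arithmetic, not something the app reports): if total latency ≈ time to first token + output tokens / decode speed, you can back out roughly how many tokens the answer was.

```python
# Back out the implied output length from the reported stats.
# Assumption: latency ≈ time_to_first_token + output_tokens / decode_speed
ttft = 15.9      # seconds to first token
decode = 0.33    # tokens/second decode speed
latency = 662.0  # seconds to complete the answer

output_tokens = (latency - ttft) * decode
print(round(output_tokens))  # ~213 tokens of output
```

So roughly a 200-token answer, which sounds about right for a "What can you see?" description.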



I did the same thing on my Pixel Fold. Tried two different images with two different prompts: "What can you see?" and "Describe this image"

First image ('Describe', photo of my desk)

- 15.6 seconds to first token

- 2.6 tokens/second

- Total 180 seconds

Second image ('What can you see?', photo of a bowl of pasta)

- 10.3 seconds to first token

- 3.1 tokens/second

- Total 26 seconds

The Edge Gallery app defaults to CPU as the accelerator. Switched to GPU.

Pasta / what can you see:

- It actually takes a full 1-2 minutes to start printing tokens. But the stats say 4.2 seconds to first token...

- 5.8 tokens/second

- 12 seconds total

Desk / describe:

- The output is: while True: print("[toxicity=0]")

- Bugged? I stopped it after 80 seconds of output. 1st token after 4.1 seconds, then 5.7 tokens/second.


Pixel 4a release date = August 2020

Pixel Fold was in the Pixel 8 generation but uses the Tensor G2 from the Pixel 7 series. Pixel 7 release date = October 2022

That's a 26 month difference, yet a full order of magnitude difference in token generation rate on the CPU. Who said Moore's Law is dead? ;)


As another data point, on E4B, my Pixel 6 Pro (Tensor v1, Oct 2021) is getting about 4.4 t/s decode on a picture of a glass of milk, and over 6 t/s on text chat. It's amazing; I never dreamed I'd be viably running an 8 billion param model when I got it 4 years ago. And kudos to the Pixel team for including 12 GB of RAM when even today PC makers think they can get away with selling 8.


The Pixel 8 has the Tensor G3 chip.


Gemma-3n-E4B-it on my 2022 Galaxy Z Fold 4.

CPU:

7.37 seconds to first token

35.55 tokens/second prefill speed

7.09 tokens/second decode speed

27.97 seconds to complete the answer

GPU:

1.96 seconds to first token

133.40 tokens/second prefill speed

7.95 tokens/second decode speed

14.80 seconds to complete the answer
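Comparing the two runs above (my own arithmetic): the GPU's win is almost entirely in prefill; decode speed barely moves.

```python
# CPU vs GPU runs on the Galaxy Z Fold 4 (Gemma-3n-E4B-it), stats as reported
cpu = {"prefill": 35.55, "decode": 7.09, "total": 27.97}
gpu = {"prefill": 133.40, "decode": 7.95, "total": 14.80}

print(f"prefill speedup: {gpu['prefill'] / cpu['prefill']:.1f}x")  # ~3.8x
print(f"decode speedup:  {gpu['decode'] / cpu['decode']:.1f}x")    # ~1.1x
print(f"end-to-end:      {cpu['total'] / gpu['total']:.1f}x")      # ~1.9x
```

Which is what you'd expect if prefill is compute-bound (parallel, GPU-friendly) while decode is memory-bandwidth-bound.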


So apparently the NPU can't be used for models like this. I wonder what it is even good for.


Pixel 9 Pro XL

("What can you see?"; photo of small monitor displaying stats in my home office)

1st token: 7.48s

Prefill speed: 35.02 tokens/s

Decode speed: 5.72 tokens/s

Latency: 86.88s

It did a pretty good job. The photo had lots of glare and was taken at a bad angle and from a distance, with small text; it picked out the weather, outdoor temperature, CO2 (ppm), temp (°C), and PM2.5 (µg/m³) in the office. It misread "Homelab" as "Household" but got the UPS load and power correct, misread "Homelab" again (smaller text this time) as "Hereford" but got the power in W, and misread "Wed May 21" on the weather map as "World May 21".

Overall very good considering how poor the input image was.

Edit: E4B


In my case it was pretty fast, I would say. Using an S24 FE on Gemma-3n E2B int4, it took around 20 seconds to answer "Describe this image", and the result was pretty amazing.

Stats -

CPU -

first token - 4.52 sec

prefill speed - 57.50 tokens/s

decode speed - 10.59 tokens/s

Latency - 20.66 sec

GPU -

first token - 1.92 sec

prefill speed - 135.35 tokens/s

decode speed - 11.92 tokens/s

Latency - 9.98 sec


10min and 10% battery?



