Mobile Architecture
Runtime Architecture
graph TD
    subgraph Flutter["Flutter Layer (Dart)"]
        Main["main.dart\n- bootstraps metrics\n- routes"]
        UI["Screens\nSplash / Home / Help / Start / Chat / Metrics"]
        BridgeClient["LlmPlatformChannel\n(MethodChannel client)"]
        Metrics["MetricsRecorder\n(JSON + CSV export)"]
    end
    subgraph Bridge["Flutter Platform Bridge"]
        MC["MethodChannel: llm_inference"]
    end
    subgraph Android["Android Native Layer (Kotlin)"]
        Activity["MainActivity\nMethodCallHandler"]
        Download["Model downloader\n(URL -> filesDir)"]
        Inference["InferenceModel\nMediaPipe session"]
        Prefs["SharedPreferences\nllm_prefs.model_path"]
    end
    subgraph Storage["Local Storage"]
        ModelFile["App filesDir\nqwen.task"]
        MetricsJson["app-docs/llm_metrics.json"]
        MetricsCsv["Downloads/llm_metrics.csv"]
    end
    UI --> BridgeClient
    BridgeClient --> MC
    MC --> Activity
    Activity --> Download
    Activity --> Inference
    Download --> ModelFile
    Download --> Prefs
    BridgeClient --> Metrics
    Metrics --> MetricsJson
    Metrics --> MetricsCsv
    Inference --> ModelFile
Key Components
1. App Entry and Navigation
main.dart initializes the MetricsRecorder before rendering the first screen.
- Named routes:
/ -> splash
/home -> feature selection
/help -> assistance form
/resqbot -> model bootstrap/chat entry
2. Flutter-to-Android Bridge
LlmPlatformChannel wraps the llm_inference method channel.
- Exposed calls:
downloadModel
isModelDownloaded
generateResponse
- Stream-like progress updates delivered via method callbacks (downloadProgress).
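The three bridge calls above can be sketched as a plain dispatch function, mirroring what MainActivity's MethodCallHandler does on the native side. Only the method names come from this doc; the result type, guard logic, and the generate callback are illustrative assumptions, modeled without Android dependencies so the routing logic stands alone.

```kotlin
// Hypothetical sketch of the llm_inference dispatch. BridgeResult and the
// generate lambda are stand-ins for Android's Result callback and the real
// MediaPipe-backed generation; only the method names are taken from the doc.
sealed class BridgeResult {
    data class Success(val value: Any?) : BridgeResult()
    data class Failure(val code: String, val message: String) : BridgeResult()
}

fun dispatch(
    method: String,
    modelDownloaded: Boolean,
    generate: (String) -> String,   // stands in for the native inference call
    prompt: String? = null,
): BridgeResult = when (method) {
    "isModelDownloaded" -> BridgeResult.Success(modelDownloaded)
    "downloadModel" -> BridgeResult.Success("download started")  // real code kicks off an async download
    "generateResponse" -> when {
        !modelDownloaded -> BridgeResult.Failure("NO_MODEL", "download the model first")
        prompt == null -> BridgeResult.Failure("BAD_ARGS", "missing prompt")
        else -> BridgeResult.Success(generate(prompt))
    }
    else -> BridgeResult.Failure("NOT_IMPLEMENTED", "unknown method: $method")
}
```

Keeping the dispatch pure like this makes the error paths (missing model, missing arguments, unknown method) easy to exercise without a device.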
3. Native Model Runtime
MainActivity handles all bridge methods.
- Downloads the model file from a Hugging Face URL into the app-private filesDir.
- Persists the model path in SharedPreferences (llm_prefs) for reuse on app restart.
- Uses the InferenceModel singleton for MediaPipe LlmInference session creation and generation.
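The restore-on-restart step reduces to a small pure check: a path saved under llm_prefs only counts as a usable model if the file is still on disk. A minimal sketch, assuming the function name and the injectable fileExists predicate (used here so the logic runs without SharedPreferences):

```kotlin
import java.io.File

// Restore logic: a persisted model path (read from llm_prefs in the real app)
// is only valid if the file it points at still exists; a stale or empty entry
// is treated the same as "no saved model".
fun restoreModelPath(
    savedPath: String?,
    fileExists: (String) -> Boolean = { File(it).exists() },
): String? {
    if (savedPath.isNullOrBlank()) return null          // nothing persisted yet
    return if (fileExists(savedPath)) savedPath else null  // stale entry: re-download
}
```

This is exactly the PathRestore branch in the state diagram below: a valid path short-circuits to ModelReady, anything else falls through to ModelMissing.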
4. Metrics Pipeline
- Each generateResponse() call (Dart side) captures:
- latency,
- memory before/after (RSS),
- prompt/response lengths,
- response rate indicators.
- Metrics are persisted to local JSON and can be exported to CSV.
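One metrics record and its CSV row might look like the sketch below. The field names and the characters-per-second rate are assumptions, the actual MetricsRecorder schema may differ; chars/sec simply stands in for the "response rate indicators" listed above.

```kotlin
// Hypothetical shape of one metrics record; field names are assumed,
// not taken from the real MetricsRecorder.
data class LlmMetric(
    val timestampMs: Long,
    val latencyMs: Long,
    val rssBeforeKb: Long,   // resident set size before generation
    val rssAfterKb: Long,    // resident set size after generation
    val promptChars: Int,
    val responseChars: Int,
) {
    // A simple response-rate indicator: response characters per second.
    val charsPerSec: Double
        get() = if (latencyMs == 0L) 0.0 else responseChars * 1000.0 / latencyMs

    fun toCsvRow(): String =
        "$timestampMs,$latencyMs,$rssBeforeKb,$rssAfterKb,$promptChars,$responseChars,$charsPerSec"

    companion object {
        const val CSV_HEADER =
            "timestamp_ms,latency_ms,rss_before_kb,rss_after_kb,prompt_chars,response_chars,chars_per_sec"
    }
}
```

Writing one header line plus one toCsvRow() per record is enough to produce the Downloads/llm_metrics.csv export; the JSON file can serialize the same fields.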
Model Lifecycle State Diagram
stateDiagram-v2
    [*] --> AppStarted
    AppStarted --> PathRestore: read llm_prefs
    PathRestore --> ModelReady: saved path exists + file exists
    PathRestore --> ModelMissing: no saved model
    ModelMissing --> Downloading: downloadModel()
    Downloading --> ModelReady: success
    Downloading --> DownloadFailed: network/file error
    DownloadFailed --> ModelMissing
    ModelReady --> SessionInit: first generateResponse()
    SessionInit --> InferenceRunning: session created
    InferenceRunning --> ModelReady: response returned
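The lifecycle above can be encoded as a pure transition function. The state and event names are assumptions (the transient PathRestore state is folded into two restore events), and unrecognized events leave the state unchanged:

```kotlin
// Model lifecycle as a pure state machine; names are illustrative.
enum class ModelState {
    APP_STARTED, MODEL_MISSING, DOWNLOADING, DOWNLOAD_FAILED,
    MODEL_READY, SESSION_INIT, INFERENCE_RUNNING,
}

enum class ModelEvent {
    PATH_RESTORED, PATH_MISSING, DOWNLOAD_STARTED, DOWNLOAD_OK,
    DOWNLOAD_ERROR, RETRY, GENERATE_CALLED, SESSION_CREATED, RESPONSE_RETURNED,
}

fun next(state: ModelState, event: ModelEvent): ModelState = when (state to event) {
    ModelState.APP_STARTED to ModelEvent.PATH_RESTORED -> ModelState.MODEL_READY
    ModelState.APP_STARTED to ModelEvent.PATH_MISSING -> ModelState.MODEL_MISSING
    ModelState.MODEL_MISSING to ModelEvent.DOWNLOAD_STARTED -> ModelState.DOWNLOADING
    ModelState.DOWNLOADING to ModelEvent.DOWNLOAD_OK -> ModelState.MODEL_READY
    ModelState.DOWNLOADING to ModelEvent.DOWNLOAD_ERROR -> ModelState.DOWNLOAD_FAILED
    ModelState.DOWNLOAD_FAILED to ModelEvent.RETRY -> ModelState.MODEL_MISSING
    ModelState.MODEL_READY to ModelEvent.GENERATE_CALLED -> ModelState.SESSION_INIT
    ModelState.SESSION_INIT to ModelEvent.SESSION_CREATED -> ModelState.INFERENCE_RUNNING
    ModelState.INFERENCE_RUNNING to ModelEvent.RESPONSE_RETURNED -> ModelState.MODEL_READY
    else -> state  // events that are illegal in the current state are ignored
}
```

Centralizing the transitions this way keeps the Dart UI and the Kotlin runtime in agreement about which calls are legal in which state.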
Code Map
mobile/
  lib/
    main.dart
    screens/
      help_request_screen.dart
      start_screen.dart
      chat_screen.dart
      metrics_screen.dart
    services/
      llm_platform_channel.dart
      metrics_recorder.dart
  android/app/src/main/kotlin/com/example/resqconnect_edge_app/
    MainActivity.kt
    InferenceModel.kt
    Model.kt
    llminference/   # experimental/native prototype code not wired into the Flutter build path