Fluentic is a Flutter + Agentic framework designed to make Large Language Models (LLMs) feel like callable functions within your Dart/Flutter applications. Inspired by Magentic, Fluentic allows you to define a prompt and an expected Dart return type (like `String`, `int`, or your own custom classes), and then call the LLM as if it were a simple asynchronous function.
The "LLM as a Function" Philosophy: Imagine you need an LLM to perform a calculation or generate structured data like a user profile. With Fluentic, you define this interaction much like a regular function signature:
```dart
Future<int> addNumbers(int a, int b)
Future<Superhero> generateHeroProfile(String power)
```
Fluentic bridges the gap, handling the LLM communication and data parsing for you.
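As a minimal sketch, this is what wrapping such a signature around Fluentic can look like (assuming the `Fluentic<int>` call style shown later in this README; the function and variable names are illustrative):

```dart
// Sketch only: an ordinary Dart function signature delegating to a Fluentic call.
// Fluentic<int> and its map-based call syntax are covered later in this README.
final _sum = Fluentic<int>(
  promptTemplate: "What is the sum of {a} and {b}? Respond with ONLY the number.",
);

Future<int> addNumbers(int a, int b) => _sum({'a': a, 'b': b});
```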
- Intuitive LLM Calls: Interact with LLMs using familiar Dart async function patterns.
- Typed Outputs: Get back strongly-typed Dart objects (`String`, `int`, custom classes), not just raw text.
- Custom Object Mapping: Easily map LLM responses to your Dart classes.
- Flexible Prompting: Use templates with placeholders for dynamic inputs.
- Provider Agnostic: Configure to work with OpenAI, DashScope, or any OpenAI-compatible API.
Add to your `pubspec.yaml`:

```yaml
dependencies:
  fluentic: ^0.0.1
```

Then run `flutter pub get`.
Set up your LLM API credentials and preferences once.
```dart
// main.dart
import 'package:fluentic/fluentic_config.dart';

void main() {
  FluenticConfig.global = FluenticConfig(
    apiKey: "YOUR_API_KEY", // e.g., sk-xxxx
    baseUrl: "https://api.openai.com/v1", // Or your provider's endpoint
    model: "gpt-3.5-turbo",
  );
  // ... (Proceed to model registration and runApp)
}
```
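Because the configuration only needs an OpenAI-compatible endpoint, switching providers is a matter of changing `baseUrl` and `model`. A sketch for DashScope's OpenAI-compatible mode (the endpoint and model name below are illustrative; verify them against your provider's documentation):

```dart
// Sketch: the same global config pointed at an OpenAI-compatible provider.
// The endpoint and model name are examples; check your provider's docs.
FluenticConfig.global = FluenticConfig(
  apiKey: "YOUR_DASHSCOPE_API_KEY",
  baseUrl: "https://dashscope.aliyuncs.com/compatible-mode/v1",
  model: "qwen-turbo",
);
```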
Let's create a "function" that asks the LLM to sum two numbers and return an `int`.
```dart
// somewhere_in_your_app.dart
import 'package:fluentic/fluentic.dart';

// 1. Define the Fluentic "function".
//    R (return type) is int.
//    The prompt template uses {a} and {b} as placeholders.
final calculateSum = Fluentic<int>(
  promptTemplate: "What is the sum of {a} and {b}? Respond with ONLY the resulting number.",
);

// 2. Call it like an async Dart function
Future<void> performCalculation() async {
  try {
    int sum = await calculateSum({'a': 125, 'b': 75});
    print("LLM calculated sum: $sum"); // Expected: LLM calculated sum: 200
  } catch (e) {
    print("Error during calculation: $e");
  }
}
```
No special registration is needed for basic types like `int`, `String`, `double`, `bool`, or lists/maps of these types.
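For example, a list of strings can be requested directly, without any registration step (a sketch based on the statement above; the function name and prompt are illustrative):

```dart
// Sketch: a basic collection return type, used without registration.
final suggestHeroNames = Fluentic<List<String>>(
  promptTemplate: "Suggest three superhero names themed around {theme}. "
      "Respond with ONLY a JSON array of strings.",
);

Future<void> printHeroNames() async {
  List<String> names = await suggestHeroNames({'theme': 'weather'});
  print(names); // e.g., [Stormcaller, Galewing, Thunderline]
}
```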
Now, let's have the LLM generate data for a custom `Superhero` object.

A. Define your `Superhero` class:
```dart
// models/superhero.dart
import 'dart:convert';

class Superhero {
  final String name;
  final int age;
  final String power;
  final List<String> enemies;

  Superhero({
    required this.name,
    required this.age,
    required this.power,
    required this.enemies,
  });

  static Superhero fromJson(Map<String, dynamic> json) {
    return Superhero(
      name: json['name'] as String,
      age: json['age'] as int,
      power: json['power'] as String,
      enemies: List<String>.from(json['enemies']),
    );
  }

  Map<String, dynamic> toJson() {
    return {
      'name': name,
      'age': age,
      'power': power,
      'enemies': enemies,
    };
  }

  // Not necessary, but useful for debugging
  @override
  String toString() => 'Superhero(${jsonEncode(toJson())})';
}
```
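To make the expected shape concrete, here is a quick check of `fromJson` against a hand-written sample payload (the JSON values are sample data, not actual LLM output):

```dart
// Sketch: the kind of JSON payload Superhero.fromJson expects (sample data only).
import 'dart:convert';
import 'superhero.dart';

void parseSample() {
  const raw =
      '{"name": "Demo Hero", "age": 30, "power": "Flight", "enemies": ["Gravity"]}';
  final hero = Superhero.fromJson(jsonDecode(raw) as Map<String, dynamic>);
  print(hero); // Superhero({"name":"Demo Hero","age":30,...})
}
```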
B. Register your `Superhero` class with Fluentic:

This crucial step tells Fluentic how to create a `Superhero` instance from an LLM's JSON response and provides an example for structuring prompts. Do this in your `main.dart` before `runApp()`.
```dart
// main.dart
// ... (imports: fluentic_config.dart, fluentic_types.dart, superhero.dart)

void main() {
  // FluenticConfig.global setup (as in Step 1)
  FluenticConfig.global = FluenticConfig(apiKey: "YOUR_API_KEY", ...);

  // --- Register Superhero Model ---
  registerFluenticModel<Superhero>(
    fromJson: Superhero.fromJson, // The factory to create a Superhero
    // Provide a real example instance, created on the fly.
    // This helps Fluentic understand the expected output structure.
    exampleInstance: Superhero(
      name: "OnTheFly Man",
      age: 1,
      power: "Constructor Activation",
      enemies: ["Boilerplate Code"],
    ),
  );

  runApp(const MyApp());
}
```
C. Create and use the `Fluentic<Superhero>` function:
```dart
// somewhere_in_your_app.dart
import 'package:fluentic/fluentic_core.dart';
import 'superhero.dart'; // Your Superhero model

// 1. Define the Fluentic "function" for Superheroes
final generateSuperhero = Fluentic<Superhero>(
  promptTemplate: "Generate a superhero named {name}.",
);

// 2. Call it
Future<void> createNewHero() async {
  try {
    Superhero hero = await generateSuperhero({'name': "Homelander"});
    print(hero);
    // Example output:
    // Superhero({"name":"Homelander","age":35,"power":"Superhuman strength and durability","enemies":["Powerful adversaries"]})
  } catch (e) {
    print("Error generating superhero: $e");
  }
}
```
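In a Flutter UI, such a call fits the usual async widget patterns. A minimal sketch with `FutureBuilder`, assuming the `generateSuperhero` instance defined above is accessible from this file (the widget and field names are illustrative, not part of Fluentic's API):

```dart
// Sketch: showing a generated hero in a widget. All names here are illustrative.
import 'package:flutter/material.dart';
import 'superhero.dart';

class HeroProfileCard extends StatelessWidget {
  const HeroProfileCard({super.key, required this.heroName});

  final String heroName;

  @override
  Widget build(BuildContext context) {
    // In real code, cache this future (e.g., in a StatefulWidget's initState)
    // so the LLM call is not repeated on every rebuild.
    return FutureBuilder<Superhero>(
      future: generateSuperhero({'name': heroName}),
      builder: (context, snapshot) {
        if (snapshot.hasError) return Text('Error: ${snapshot.error}');
        if (!snapshot.hasData) return const CircularProgressIndicator();
        final hero = snapshot.data!;
        return Text('${hero.name}: ${hero.power}');
      },
    );
  }
}
```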
When you want Fluentic to return an instance of your custom class (like `Superhero`), you must explicitly register it using `registerFluenticModel<YourClass>()`. This involves:

- `fromJson`: Providing the static factory method (e.g., `Superhero.fromJson`) that Fluentic will use to convert the LLM's JSON output into an instance of your class.
- `exampleInstance`: Supplying a valid, concrete example instance of your class (e.g., `Superhero(name: "Demo", ...)`). This instance serves multiple purposes:
  - It helps Fluentic generate more effective system prompts to guide the LLM on the desired JSON output structure.
  - It "activates" your class file during registration, ensuring Dart's compiler and runtime are aware of it, which is crucial for reliable operation and avoids tree-shaking issues for otherwise unreferenced code.

This explicit, one-time registration in your application's setup (e.g., `main()`) ensures robust and predictable behavior when working with custom objects.
You can pass data into your prompt templates in several ways (see the sketch after this list):

- Named Arguments (Map): `await fluenticInstance({'placeholderName': value})`
- Positional Arguments (List): arguments fill placeholders in order. `await fluenticInstance([value1, value2])`
- No Arguments: if the template has no placeholders, call with `null` or an empty map/list. `await fluenticInstance(null)`
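A small sketch of all three calling styles (the templates and function names here are illustrative):

```dart
// Illustrative Fluentic functions used only to demonstrate the calling styles.
final describeCity = Fluentic<String>(
  promptTemplate: "Describe {city} in one sentence.",
);
final tellJoke = Fluentic<String>(
  promptTemplate: "Tell a one-line programming joke.",
);

Future<void> demoArgumentStyles() async {
  // Named arguments: the map key matches the {city} placeholder.
  print(await describeCity({'city': 'Lisbon'}));
  // Positional arguments: values fill placeholders in order.
  print(await describeCity(['Lisbon']));
  // No placeholders: pass null (or an empty map/list).
  print(await tellJoke(null));
}
```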
LLM interactions can be unpredictable. Always use `try-catch` blocks for your `Fluentic` calls to handle potential network errors, API issues, or parsing failures if the LLM response is not as expected.
- Explore different prompt engineering techniques.
- Integrate more complex custom objects.
- Manage configurations for different LLM providers.
Contributions are welcome! Please open an issue or submit a pull request.