Hey, good question! We deliver holistic wealth management as a single integrated experience.
Most retail platforms are vertically segmented: separate tools and products for investing, budgeting, savings, etc.
Users have to figure out which product to use and when, or worse, they get cross-sold products they don't need. If you sign up for Betterment or Wealthfront right now, you're left guessing where to start and what's appropriate. We step in where a family office or high-net-worth adviser would, and as an SEC-registered investment adviser we have a fiduciary duty to put your needs first.
Our implementation: we analyze your complete financial picture (via Plaid/OAuth integration), then use reasoning models and deterministic logic to surface the right feature at the right time based on what's actually blocking your wealth growth. If debt is your biggest issue, we guide you there first, not because you selected "debt management" but because that's what you need.
Investment recommendations, budgeting features, debt paydown strategies: they all work together as one cohesive plan, not separate products you have to stitch together yourself.
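To make the "deterministic logic" part concrete, here's a toy sketch of that prioritization idea (illustrative only, not our production engine; the field names and thresholds are made up):

```typescript
// Toy illustration of deterministic prioritization: pick the next focus area
// from a snapshot of the user's finances instead of making them choose a product.
// Field names and thresholds are hypothetical.

interface FinancialSnapshot {
  highInterestDebt: number;    // e.g., credit card balances
  emergencyFundMonths: number; // months of expenses held in liquid savings
  monthlySurplus: number;      // income minus spending
}

type Focus = "budgeting" | "debt_paydown" | "emergency_fund" | "investing";

function nextFocus(s: FinancialSnapshot): Focus {
  if (s.monthlySurplus <= 0) return "budgeting";      // nothing to allocate yet
  if (s.highInterestDebt > 0) return "debt_paydown";  // usually the biggest drag on growth
  if (s.emergencyFundMonths < 3) return "emergency_fund";
  return "investing";
}
```

In production this kind of rule sits alongside reasoning models over your full Plaid-linked picture, but the principle is the same: the plan picks your next move rather than you picking a product.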
Initial setup takes <10 minutes (including time spent verifying that traces get recorded); we have a `tusk init` setup wizard that walks you through creating a config.
Thanks! Good question. Tusk Drift isn't quite designed for these use cases.
Currently, Drift is language-specific. You'd need the SDK installed in your backend while recording tests. This is because Drift captures not just the HTTP request/response pairs, but also all underlying dependency calls (DB queries, Redis operations, etc.) so it can properly mock them during replay.
A use case we do support is refactors within the same language. You'd record traces in your current implementation, refactor your code, then replay those traces to catch regressions.
For cross-language rewrites or browser-exported requests, you might want to look at tools that focus purely on HTTP-level recording/replay like Postman Collections. Hope this helps!
We instrument JWT libraries directly (jsonwebtoken, jwks-rsa). Both `jwt.sign()` and `jwt.verify()` calls are captured during recording, and during replay you get back the recorded results. So if a token was valid during recording, it stays valid during replay, even if it would be expired "now"; the test runs in the temporal context of when it was recorded.
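For intuition, here's a simplified sketch of that replay behavior using the jsonwebtoken API (the trace shape and lookup here are hypothetical, not the actual SDK internals):

```typescript
import jwt, { JwtPayload } from "jsonwebtoken";

// Hypothetical shape of a recorded jwt.verify() call; the real trace format differs.
interface RecordedVerify {
  token: string;
  result: JwtPayload | string; // what verify() returned at record time
}

// During replay, a token seen at record time gets its recorded outcome back,
// so an expiry check never fails just because the test runs later.
function replayVerify(recorded: RecordedVerify, token: string): JwtPayload | string {
  if (token === recorded.token) {
    return recorded.result;
  }
  // Anything not seen during recording falls through to a real verification.
  return jwt.verify(token, process.env.JWT_SECRET as string);
}
```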
Also, yes, appreciate you calling this out. Post-replay deviation classification plus automated RCA for unintended deviations is another differentiator. Let me know if you have feedback once you've had time to explore.
We capture the actual DB queries, Redis cache hits, and JWT generation, not just the HTTP calls (as you would see with mitmproxy), which lets us replay the full request chain without needing a live database or cache. This way, each test runs idempotently.
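Conceptually, the replay side looks something like this (a rough sketch with hypothetical names, not our real trace format or interception code):

```typescript
// Each outbound dependency call (DB query, Redis GET, etc.) recorded with the
// trace is keyed, and during replay the instrumented client returns the
// recorded result instead of hitting a real database or cache.

type DependencyKind = "postgres" | "redis" | "jwt";

interface RecordedCall {
  kind: DependencyKind;
  key: string;     // e.g., normalized SQL text or Redis key
  result: unknown; // what the dependency returned at record time
}

class ReplayStore {
  private calls = new Map<string, unknown>();

  constructor(recorded: RecordedCall[]) {
    for (const c of recorded) this.calls.set(`${c.kind}:${c.key}`, c.result);
  }

  lookup(kind: DependencyKind, key: string): unknown {
    const id = `${kind}:${key}`;
    if (!this.calls.has(id)) {
      throw new Error(`No recorded result for ${id}`);
    }
    return this.calls.get(id);
  }
}
```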
1. With our Cloud offering, Tusk Drift detects schema changes, then automatically re-records traces from new live traffic to replace the stale traces in the test suite. If using Drift purely locally though, you'd need to manually re-record traces for affected endpoints by hitting them in record mode to capture the updated behavior.
2. Our CLI tool includes built-in dynamic field rules that handle common non-deterministic values with standard UUID, timestamp, and date formats during response comparison. You can also configure custom matching rules in your `.tusk/config.yaml` to handle application-specific non-deterministic data. (See the sketch after this list for the general idea.)
3. Our classification workflow correlates deviations with your actual code changes in the PR/MR (including context from your PR/MR title and body). Classification is "fine-tuned" over time for each service based on past feedback on test results.
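On (2), here's a rough illustration of how shape-based matching for dynamic fields can work (simplified and hypothetical; not our actual comparison code or config syntax):

```typescript
// Values that look like UUIDs or ISO timestamps are compared by shape rather
// than by value, so a freshly generated ID or "now" timestamp in the replayed
// response doesn't get flagged as a deviation.

const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
const ISO_TS_RE =
  /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})$/;

function fieldsMatch(recorded: unknown, replayed: unknown): boolean {
  if (typeof recorded === "string" && typeof replayed === "string") {
    if (UUID_RE.test(recorded) && UUID_RE.test(replayed)) return true;
    if (ISO_TS_RE.test(recorded) && ISO_TS_RE.test(replayed)) return true;
  }
  return recorded === replayed; // everything else must match exactly
}

// Two different but well-formed UUIDs compare as equal;
// "PAID" vs "REFUNDED" does not.
```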
Agree. My biggest pain point with LLM code review tools is that they sometimes add 40 comments for a PR changing 100 lines of code. Gets noisy and hard to decipher what really matters.
Along the lines of verifiability, my take is that running a comprehensive suite of tests in CI/CD is going to be table stakes soon given that LLMs are only going to be contributing more and more code.