You added a bug reporting SDK to your app. You deployed it. You waited. Your dashboard is empty.
The instinct is to blame the tool: maybe the API token is wrong, maybe the SDK failed to initialize. But in most cases, the SDK is working fine. The real problem splits into two categories: discoverability and trust failures (most common) and technical misconfiguration (less common but faster to diagnose). This guide covers both, ranked by probability, so you can work top-to-bottom and stop as soon as you find your issue.
Symptoms: How to Confirm You Have This Problem
Before you start troubleshooting, confirm what “no reports” actually means:
- Zero reports in the dashboard despite confirmed active users. Check your analytics (DAU, session count, active installs). If users are opening the app, the feedback pipeline should be producing something.
- SDK initialized successfully but no reports created. The SDK is talking to the server, registering installs, but users aren’t completing the submission flow. This points to a discoverability or friction problem, not a technical one.
- You can submit reports, but real users can’t (or won’t). If your QA team has submitted test reports but nothing arrives from actual users or beta testers, the technical pipeline works. The human pipeline doesn’t.
Quick Diagnostic Steps
Step 1: Verify the SDK is initialized and your API token is valid. With Critic, hit POST /api/v2/ping with your product access token. If it returns success and installs appear in your dashboard, initialization is confirmed. Your tool’s equivalent health check endpoint serves the same purpose.
Step 2: Submit a test report from a physical device, not a simulator. The shake gesture fails to trigger reliably in the iOS Simulator, a known issue documented by IBM. If the test report arrives in your dashboard with full device telemetry, your technical pipeline is confirmed working.
If both checks pass, your problem is almost certainly in the first three causes below. If either check fails, skip ahead to Causes 4–7.
Common Causes (Most Likely to Least Likely)
Causes are ranked by how often they appear in practice. Start from the top; most readers will find their answer in the first two sections.
Cause 1: Your Users Have No Idea Shake-to-Report Exists
This is the most common cause by a wide margin, and it’s the one most developers overlook because they know the feature is there.
Gesture discoverability is a well-documented UX problem. As Smashing Magazine established in their analysis of mobile gestures: “gestures have a lower discoverability; they are always hidden and people need to be able to identify these options.” The Interaction Design Foundation reinforces this: “Hidden or undocumented gestures, even simple ones, can often go unused or come to light much later for users.” Without a visual hint that shaking triggers something, users will never discover it on their own.
The LinkedIn case makes this concrete. When LinkedIn built and open-sourced their “Shaky” library (shake-to-send-feedback for Android) they generated over 5,000 internal bug reports from employees in a single year. But these were employees who were explicitly told about the feature as part of the company’s dogfooding process. The library was purpose-built for internal use where discoverability was handled by announcement, not by UI design.
The Hacker News thread “Most users won’t report bugs unless you make it stupidly easy” confirmed the pattern from the user side, with hundreds of comments. The consensus: friction is the number-one barrier to bug reporting. As one commenter put it, reporting bugs is work, and if submission feels like a black hole, users won’t bother. Users who are unaware the feature exists face infinite friction.
There’s an emotional dimension too. Shaking a phone happens naturally when users are frustrated, but only if they know it triggers something. Otherwise, they shake in frustration, then open the App Store and leave a one-star review.
How to fix it:
- Add a one-time onboarding tooltip on first app launch: “Found a bug? Shake your device to report it.” Keep it short; MessageGears’ research on mobile tooltips recommends no more than three lines of text. Show once, dismiss on tap, never show again. Refiner’s 2025 analysis of 1,382 in-app surveys found that center-screen modal prompts achieve a 42.6% response rate; a well-placed tooltip gets seen.
- Add a visible feedback button as a complement to shake. A small floating action button or a “Report a Bug” item in your settings or help menu gives every user a discoverable entry point. Shake is convenient for power users who already know about it; a button is discoverable for everyone else.
- Mention it in release notes and beta invite emails. One sentence: “New: shake your device to report bugs directly from the app.” Firebase’s documentation on collecting tester feedback emphasizes providing clear instructions for how testers should submit feedback. Don’t assume they’ll figure it out.
- Critic-specific: Critic’s shake-to-report works out of the box with zero UI code; the built-in form appears automatically on shake. But “works out of the box” and “is discoverable out of the box” are different things. Add the tooltip or button yourself using your app’s UI framework, then let Critic handle the reporting flow, device telemetry capture, and log collection.
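The “show once, dismiss on tap, never show again” behavior comes down to one persisted flag. A platform-agnostic sketch (the dict stands in for persistent storage, which would be SharedPreferences on Android or UserDefaults on iOS; the key name is made up):

```python
# Minimal sketch of one-time tooltip logic. The `store` dict stands in for
# persistent storage (SharedPreferences on Android, UserDefaults on iOS).
TOOLTIP_KEY = "shake_tooltip_shown"  # hypothetical preference key

def should_show_shake_tooltip(store: dict) -> bool:
    """Return True exactly once per install, then never again."""
    if store.get(TOOLTIP_KEY):
        return False
    store[TOOLTIP_KEY] = True  # persist before showing, so a crash cannot re-show it
    return True

prefs = {}
assert should_show_shake_tooltip(prefs) is True   # first launch: show it
assert should_show_shake_tooltip(prefs) is False  # every later launch: stay quiet
```

Setting the flag before the tooltip is displayed is a deliberate choice: worst case the user misses the hint once, but they are never nagged twice.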
Cause 2: Reports Go Into a Void
Even users who discover the reporting feature will stop using it if they believe nobody reads their reports.
The HN bug reporting thread surfaced this pattern explicitly. Users described submitting detailed reports only to receive silence, or worse, automated closures from stale-issue bots months later. The consensus was clear: companies that acknowledge reports and fix reported bugs motivate more detailed future reporting. Companies that stay silent kill the feedback pipeline from the user end.
A separate Hacker News discussion told a striking story: a bug in an internal tool had persisted for years. When it finally came up in a meeting, 100% of the client-facing team knew about it from personal experience, and 0% of the development team had ever heard of it. Users hadn’t reported it because they assumed developers already knew, didn’t think they’d be heard, or feared appearing incompetent. The feedback pipeline existed. The response loop didn’t.
For beta testing programs specifically, Moldstud’s research on beta testing strategies found that 68% of testers prefer real-time communication about their reports. When testers submit feedback and hear nothing back, they conclude the exercise is performative.
How to fix it:
- Enable email notifications immediately. Every tool has this setting. Critic sends automatic email notifications for new bug reports and comments; make sure they’re turned on and going to a monitored inbox, not a team mailing list that nobody reads.
- Respond to every report within 24 hours with a comment in the dashboard, even if it’s just “Thanks; we’re looking into this.” The bar is acknowledgment, not resolution. Users who see their report acknowledged are far more likely to submit again.
- Notify users when their bug is fixed. If you have user contact info (via custom metadata like an email address or user ID injected into reports) send a brief message: “The issue you reported on [date] has been fixed in version X.Y.Z.” This transforms one-time reporters into repeat contributors.
- For beta programs: send a weekly digest to testers showing what was fixed based on their feedback. Feature Upvote’s research on managing beta tester feedback recommends thanking prolific testers by name in changelogs or newsletters; making their contribution visible keeps them engaged.
Cause 3: Too Much Friction in the Submission Flow
If your feedback form asks for a title, category, priority level, description, steps to reproduce, and expected vs. actual behavior, your users will close it. Every required field is a reason to abandon.
The minimum viable feedback is a single sentence. Feature Upvote’s beta testing research confirms this: keep the minimum feedback requirement to a single sentence. Everything beyond that should be captured automatically by the SDK, not demanded from the user.
The friction problem compounds on mobile. Unlike web feedback where a user can switch tabs to grab a URL or screenshot, mobile users must leave the app entirely to gather supporting information. Embrace’s engineering blog puts it bluntly: “You shouldn’t expect users to deliver detailed bug reports.” Users lack both the technical expertise and the motivation to document bugs comprehensively. The tool must capture context automatically.
The data backs this up. Aqua Cloud’s research on mobile bug reporting found that apps with in-app feedback see up to 750% higher response rates compared to traditional support channels, primarily because they remove friction at the moment of frustration. Refiner’s 2025 analysis puts hard numbers on the platform gap: mobile in-app prompts achieve a 36.14% response rate versus 26.48% for web, confirming that meeting users where they already are dramatically increases participation.
How to fix it:
- Make the description field the only required field. Everything else (device info, logs, screenshots, app version) should be captured automatically. Critic differentiates here: every report automatically includes battery level, memory metrics, disk space, network connectivity, OS version, CPU usage, and 500 lines of console logs without the user doing anything beyond typing their description.
- Attach user identity programmatically instead of requiring authentication to submit. Critic’s arbitrary JSON metadata lets you inject user IDs, session tokens, feature flags, or any other context at SDK initialization, rather than asking users to log in before they can report a bug.
- Pre-capture the screenshot. When the feedback form opens, the user should see a screenshot already attached. Critic’s native SDKs include built-in screenshot capture utilities that handle this automatically. If users have to take a screenshot manually, switch to the form, and attach it, most will abandon the process.
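Attaching identity programmatically can be sketched as assembling the metadata once at startup and letting the SDK carry it on every report. The function and key names below are illustrative, not Critic’s actual API; the point is that the user types nothing:

```python
import json

def build_report_metadata(user_id: str, session_token: str, flags: dict) -> str:
    """Assemble the arbitrary JSON metadata attached to every report.

    Key names are illustrative; the article notes Critic accepts any
    key-value pairs, so use whatever your support workflow needs.
    """
    metadata = {
        "user_id": user_id,
        "session_token": session_token,
        "feature_flags": flags,
    }
    return json.dumps(metadata)

payload = build_report_metadata("u-1042", "sess-abc", {"new_checkout": True})
assert json.loads(payload)["user_id"] == "u-1042"
```

Because the metadata is injected at SDK initialization, every report arrives already tied to a user you can follow up with, without ever gating submission behind a login.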
Cause 4: SDK Excluded from Production Builds
This is the most common technical cause, and it’s particularly insidious because everything works perfectly on the developer’s device.
How it happens: The feedback SDK dependency is scoped to debug-only configuration. On Android, this means `debugImplementation` instead of `implementation` in your Gradle file. On iOS, the pod is conditionally included only for debug configurations in the Podfile. On Flutter, the package ends up under `dev_dependencies:` instead of `dependencies:` in `pubspec.yaml`. The SDK compiles into your development builds, you test it, it works, and it’s completely absent from the production APK or IPA your users download.
The ProGuard/R8 trap (Android): Even if the dependency is correctly scoped to all build variants, ProGuard or R8 code shrinking can strip or obfuscate SDK classes in release builds. If the SDK uses reflection (and many do for JSON parsing and annotation processing) R8 may rename or remove classes it considers unused, causing silent failures. No crash, no error log, just no feedback form. This is a well-documented pattern across third-party Android SDKs: everything works in debug where R8 is off by default, and silently breaks in release.
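Both traps reduce to a couple of lines in the build files. A sketch for Android (the Maven coordinates are illustrative placeholders, not Critic’s real ones; the keep-rule package matches the SDK package named in this article):

```groovy
// app/build.gradle -- coordinates are illustrative placeholders
dependencies {
    // Trap: debugImplementation compiles the SDK into debug builds only.
    // debugImplementation 'io.inventiv:critic-android:x.y.z'

    // Correct: implementation ships the SDK in every build variant.
    implementation 'io.inventiv:critic-android:x.y.z'
}
```

```
# proguard-rules.pro -- keep the SDK intact through R8 shrinking in release builds
-keep class io.inventiv.critic.** { *; }
-dontwarn io.inventiv.critic.**
```

The `-dontwarn` line is optional but avoids build failures when R8 cannot resolve the SDK’s own optional dependencies.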
How to fix it:
- Android: Confirm the dependency uses `implementation` (not `debugImplementation`) in your `build.gradle`. Add ProGuard/R8 keep rules for the SDK’s packages (e.g., `-keep class io.inventiv.critic.** { *; }` for Critic).
- iOS: Check your Podfile for conditional configuration blocks that might exclude the Critic pod from release builds.
- Flutter: Open `pubspec.yaml` and verify `inventiv_critic_flutter` is under `dependencies:`, not `dev_dependencies:`.
- Verification: Install the release/production build on a physical device and attempt to trigger the feedback form. If it fails to appear, the SDK isn’t in the build.
Cause 5: Incorrect API Token or Initialization Error
Many SDKs fail silently when the API token is invalid: no crash, no error dialog, just no feedback form. The SDK initializes, detects the bad token on the first API call, and quietly disables itself. You won’t see a stack trace because the SDK handled the error gracefully (too gracefully).
Common variations:
- Environment mismatch: Using a staging API token in a production build, or a production token in development.
- Copy-paste errors: A trailing space or newline character in the token string, invisible in your IDE.
- Initialization order: Some SDKs require being initialized before other frameworks to avoid swizzling conflicts on iOS. Initialize your feedback SDK early in the app lifecycle to avoid conflicts with other libraries that may intercept the same system hooks.
How to fix it:
- Verify the API token directly by calling Critic’s `POST /api/v2/ping` endpoint with your token. If it returns an error, the token is wrong or expired.
- Check initialization order. Ensure the bug reporting SDK is initialized early in the app lifecycle: in `Application.onCreate()` (Android), `application(_:didFinishLaunchingWithOptions:)` (iOS), or `main()` (Flutter), before other framework initializations.
- Audit environment-specific tokens. If you use different API tokens per environment, confirm the production build is using the production token. A build configuration mismatch here will silently break all user-facing feedback.
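The invisible copy-paste errors described above (a trailing space or newline in the token string) are cheap to catch with a startup sanity check. A language-agnostic sketch of the idea:

```python
def token_problems(token: str) -> list:
    """Return human-readable problems with an API token string, if any.

    A sketch of a debug-build sanity check; run it once at startup and log
    the result before handing the token to the SDK.
    """
    problems = []
    if not token.strip():
        problems.append("token is empty")
    if token != token.strip():
        problems.append("token has leading/trailing whitespace")
    if "\n" in token or "\r" in token:
        problems.append("token contains a line break")
    return problems

assert token_problems("abc123") == []
assert "token contains a line break" in token_problems("abc123\n")
```

A trailing newline is the classic failure: it is invisible in most IDEs, survives copy-paste from a dashboard, and makes every API call return an auth error while the SDK fails silently.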
Cause 6: Shake Detection Disabled or Unreliable
A developer disabled shake detection for a specific screen (a game scene with motion controls, a map with tilt gestures, a fitness feature using the accelerometer) and forgot to re-enable it. Or a configuration flag was set to false during debugging and never toggled back.
The device sensitivity problem: Even when shake detection is enabled, devices respond differently. Accelerometer sensitivity varies between manufacturers and models; some phones require a significantly more vigorous shake than the developer’s primary test device. Testing only on a flagship phone in the office misses the long tail of budget and mid-range devices your users actually carry.
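Most shake detectors reduce to the same math: compare the magnitude of the accelerometer vector against a threshold expressed in multiples of gravity. A simplified sketch (the 2.7g default and the function shape are illustrative, not any particular SDK’s implementation):

```python
import math

GRAVITY = 9.81  # m/s^2

def is_shake(x: float, y: float, z: float, threshold_g: float = 2.7) -> bool:
    """Return True when total acceleration exceeds threshold_g times gravity.

    Lower thresholds register gentler shakes; budget devices with less
    sensitive accelerometers may need a lower value to match the feel of
    a flagship test phone.
    """
    g_force = math.sqrt(x * x + y * y + z * z) / GRAVITY
    return g_force > threshold_g

# A phone at rest reads roughly (0, 0, 9.81), i.e. 1.0g: no shake.
assert is_shake(0.0, 0.0, 9.81) is False
# A vigorous shake spikes well past the threshold.
assert is_shake(25.0, 10.0, 9.81) is True
```

This is why testing only on one device is misleading: the same physical motion produces different raw magnitudes on different hardware, so a threshold tuned on your flagship may be unreachable on a budget phone.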
How to fix it:
- Grep your codebase for any programmatic disable of shake detection. Search for `isAllowShake`, `setShakeEnabled`, `enableShake(false)`, or your SDK’s equivalent configuration flag.
- Test on multiple physical devices, not just your primary development phone. The shake threshold varies meaningfully between manufacturers and models.
- Provide a fallback trigger. A visible button or menu item ensures that even when shake detection fails or feels unreliable on a particular device, users still have a path to submit feedback.
Cause 7: Missing Network Permissions (Android)
The INTERNET permission is missing from AndroidManifest.xml, or the permission declaration has a case-sensitivity error. Android silently denies the network request: the feedback form appears, the user writes their report, taps submit… and nothing happens. No error message, no retry prompt. The report vanishes.
The debug-vs-release divergence: In some cross-platform frameworks, internet permissions are automatically added for debug builds but omitted from release builds. Everything works in development. Everything fails silently in production.
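For reference, this is the declaration that must survive into the release manifest (a minimal fragment; a real manifest contains much more):

```xml
<!-- AndroidManifest.xml: declared outside <application>, name is case-sensitive -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <uses-permission android:name="android.permission.INTERNET" />
</manifest>
```

Check the merged manifest of the release variant, not just your source file: in Android Studio the Merged Manifest view shows what actually ships after manifest merging.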
How to fix it:
- Verify `<uses-permission android:name="android.permission.INTERNET"/>` is present in your `AndroidManifest.xml` with correct casing.
- Test the full submission flow (not just SDK initialization) on a release build on a physical device. Submit a report and verify it appears in the dashboard.
Step-by-Step Resolution (Quick-Reference Checklist)
If you want the fast version, work through this list in order. Each step maps to the detailed cause above:
- Submit a test report from a physical device to confirm the entire pipeline works end-to-end (Causes 5, 6, 7)
- Verify the ping endpoint / install registration to confirm the SDK is initialized and the token is valid (Cause 5)
- Install the production build and try to trigger the feedback form to confirm the SDK is present in release builds (Cause 4)
- Search your codebase for shake disable flags to confirm shake isn’t programmatically disabled (Cause 6)
- Ask three beta testers: “Do you know you can shake your phone to report a bug?” to confirm discoverability (Cause 1)
- Check your dashboard for unanswered reports to confirm you aren’t losing repeat reporters to silence (Cause 2)
- Count the required fields in your feedback form. If it’s more than one (description), you have a friction problem (Cause 3)
If the Problem Persists
When none of the seven causes above explain your empty dashboard:
- Check SDK-specific issue trackers. GitHub issues for Critic’s Android SDK, Critic’s Flutter SDK, or whichever tool you’re using may document known bugs or device-specific incompatibilities.
- Inspect network traffic. Use Charles Proxy or Android Studio’s Network Inspector to confirm the bug report HTTP request is actually being sent and what response the server returns. A 401 means bad token. A timeout means network issue. A 200 with no dashboard entry means a server-side processing problem.
- What to include in a support request: SDK version, platform and OS version, build type (debug or release), API token validation result (ping response), device model, and whether the feedback form appears at all vs. appears but submission fails silently.
- Critic-specific: Check the API docs for endpoint troubleshooting, or reach out through the web dashboard.
Prevention: Ensuring High Submission Rates from Day One
This checklist ensures your feedback pipeline works from the moment you ship. Don’t wait for an empty dashboard to diagnose.
- Verify SDK initialization on every build variant. Add a CI check that builds release and confirms the feedback SDK dependency is included; not just that it compiles, but that the SDK classes are present in the final artifact.
- Test the full submission flow on a physical device before every release. Submit a report and verify it arrives in the dashboard with device telemetry attached.
- Add a one-time onboarding tooltip teaching users about shake-to-report on first app launch. One sentence, dismiss on tap, never show again.
- Add a visible fallback trigger (a settings menu item, a help screen option, or a floating button) in addition to shake. Discoverability beats elegance.
- Enable email notifications for new reports so your team responds within 24 hours. An unmonitored dashboard is the same as no dashboard.
- Brief your beta testers explicitly. In your beta invite email, include one sentence: “Found a bug? Shake your phone to report it instantly; we’ll get device info and logs automatically.”
- Attach custom metadata automatically. Inject user IDs, session data, and app state via the SDK so you can proactively follow up with testers who haven’t submitted anything. Critic’s arbitrary JSON metadata accepts any key-value pairs you need: user email, subscription tier, feature flags, A/B test variant.
- Send a weekly update to beta testers showing what you’ve fixed based on their feedback. Moldstud’s research found that 68% of testers prefer real-time communication; a weekly digest is the minimum viable feedback loop.
- Use tools that minimize user effort. Critic’s automatic capture of battery, memory, disk, network, OS, 500 lines of console logs, and screenshots means the user’s only job is typing one sentence. The less you ask of users, the more you’ll hear from them.
- Close the loop. Every report acknowledged. Every fix communicated. Users who feel heard become your most reliable testers.
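The first checklist item, confirming the SDK classes survive into the release artifact, can be automated. An APK is a zip archive, and dex files store class names as strings, so searching the dex bytes for the SDK’s package path is a good-enough CI heuristic. A sketch (the package path comes from the keep-rule example earlier in this guide; adjust it for your SDK):

```python
# CI sketch: heuristically confirm R8 kept the feedback SDK's classes by
# searching the APK's dex files for the SDK package path. Not a full dex
# parser, just a byte search, which is sufficient to catch total stripping.
import io
import zipfile

def sdk_present_in_apk(apk_bytes: bytes,
                       package_path: bytes = b"io/inventiv/critic") -> bool:
    """Return True if any classes*.dex inside the APK mentions the package."""
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as apk:
        for name in apk.namelist():
            if name.startswith("classes") and name.endswith(".dex"):
                if package_path in apk.read(name):
                    return True
    return False
```

Wired into CI as a post-build step on the release artifact, this fails the build the moment a dependency-scoping change or an overzealous shrinker rule silently drops the SDK.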
The pattern across all ten items is the same: the feedback pipeline has two ends. Most developers optimize the technical end (SDK initialization, API tokens, build configuration) and neglect the human end. An empty dashboard almost always means the human end needs work. The tools that win are the ones that make both ends effortless: one line of code for the developer, one shake for the user, and automatic device context that neither of them had to think about.
Start a free 30-day trial of Critic: one line of code, automatic device telemetry, and your first actionable bug report in minutes. No credit card required.