Rutter homepage image

Enhance visibility and streamline debugging: Rutter's Request Logs feature

 


About Rutter

Rutter is a unified API that simplifies integrations with accounting, commerce, and payment platforms. In today's digital landscape, companies rely on real-time business data to serve customers effectively. However, retrieving this fragmented data across numerous platforms has historically been a significant challenge.

Rutter solves this problem by offering a single API that connects to multiple data sources, supporting a wide range of use cases.

The problem

As Rutter's user base rapidly grew, a major bottleneck emerged: when users encountered failed requests while building on the Rutter API, they received minimal insight into the root cause, hindering effective troubleshooting. This lack of transparency often led to resolution times spanning hours or even days.

Compounding the issue, Rutter's engineering teams had to manually investigate and debug these complex errors, consuming over 20% of their bandwidth. Frustration mounted for both customers and internal teams, impacting productivity and the overall developer experience.


The goals

To address these challenges, Rutter initiated a project aimed at streamlining the debugging process, reducing resolution times, and providing a more seamless experience for all involved.

The primary goal was to integrate Request Logs into the Rutter Dashboard, empowering customers to troubleshoot independently while improving Rutter's ability to investigate and resolve complex failures efficiently.

Role and team

I spearheaded all design efforts, from discovery through execution: leading discovery activities, crafting low- and high-fidelity designs, moderating user tests, and synthesizing insights for iteration.

The core team consisted of the Founder & CEO, a front-end engineer, and a technical lead.


Discovery

The initial phase focused on gaining a deeper understanding of the problem space, target users, and desired outcomes. My goal was to align with key stakeholders and gather insights to inform the design process.

I conducted stakeholder interviews with core team members and engineers responsible for debugging customer issues, diving deep into the current process, identifying assumptions, and discussing potential solutions.

Additionally, I reviewed existing user feedback and documentation, such as Slack threads, Zendesk tickets, and internal documents. This provided valuable context, past examples, and insights into users' challenges.

Key learnings from discovery:

Who are the users? Who will be impacted?

  • Engineers building integrations with Rutter: Developers from companies like Ramp and Mercury, who integrate their systems with Rutter's API to enable seamless data exchange and access.

  • Rutter's engineering team: The internal team responsible for supporting customers, debugging issues, and platform maintenance.

Key issues identified:

  1. Limited visibility and data logging: The system stored information about incoming requests but did not log responses or intermediate steps, leaving both customers and Rutter's engineering team lacking visibility into the exact point of failure.

    • In some cases, users did not receive error messages despite issues or unexpected behavior (e.g. semantic errors), making it difficult to understand why they were not receiving the expected outcome.

  2. Cumbersome debugging process:

    • Rutter engineers often had to manually sift through logs to identify the root cause of errors, resulting in time-consuming efforts and delays in issue resolution.

    • In many cases, they had to recreate the issue locally to investigate further, adding complexity.

  3. Inadequate information for debugging:

    • Users received status codes indicating success or failure, but error messages often lacked detailed explanations, making it challenging to understand or resolve issues.

    • The quality of error information varied significantly, with some messages providing minimal details while others were more descriptive.

    • Efforts were made to deliver third-party platform error messages to users when possible, but this was inconsistent.

    • In some instances, customers received poorly formatted errors (e.g. errors Rutter had not seen before), leaving them uncertain about how to proceed.

    • Improving error messaging was deemed out of scope for this project.

 
 

Problem framing

Armed with a better understanding of the problem space and challenges faced by users (internally and externally), I framed the problem statement to guide our next steps:

"How might we develop a solution that provides comprehensive visibility into requests and empowers users to confidently troubleshoot issues, saving time and frustration for both Rutter's team and customers?"

Core ideas

During our exploration of solutions to enhance visibility and streamline debugging, several core ideas emerged:

  1. Request IDs:

    • Introduce unique IDs for every request

    • Include these IDs in all API responses, regardless of status

    • Utilize IDs for internal logging and tracing

  2. Comprehensive request logging:

    • Implement internal logging to track outgoing responses

    • Log intermediate data and steps in an easy-to-process format

  3. Exposing log data through the Dashboard:

    • Include Request Logs within the Rutter Dashboard for user and internal team access

    • Enable search and filtering capabilities to locate specific requests (e.g. filter by time range, ID, status code)

    • Provide multiple views (list and detailed) for navigating request logs
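To make the first two core ideas concrete, here is a minimal sketch of how a request handler might attach a unique ID to every response and log each intermediate step. This is illustrative only: field names like `request_id` and stage labels are assumptions, not Rutter's actual schema.

```python
import uuid
from datetime import datetime, timezone


def handle_request(path, payload):
    """Attach a unique ID to a request and log each intermediate step."""
    request_id = str(uuid.uuid4())  # core idea 1: unique ID per request
    log = []

    def record(stage, data):
        # Core idea 2: log intermediate data in an easy-to-process format
        log.append({
            "request_id": request_id,
            "stage": stage,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "data": data,
        })

    record("rutter_request", {"path": path, "payload": payload})
    # (forwarding to the third-party platform would be recorded here
    #  as additional intermediate stages)
    response = {"status": 200, "request_id": request_id, "data": {}}
    record("rutter_response", response)
    return response, log
```

Because the same ID appears in the response and in every log entry, users can quote it to support, and engineers can trace the full lifecycle of a single request.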

Assumptions

Key assumptions made by the team regarding user access, expected benefits, and potential impact on support processes:

  • Users will access Request Logs through the Rutter Dashboard

  • Providing access to comprehensive log data will equip users to find requests, gain insights into inputs/outputs, and debug more effectively

  • Improved transparency aims to demystify the platform, reducing reliance on support and fostering user autonomy

  • For complex issues, users may still need support, but improved observability will reduce the overall burden and enable more efficient collaboration

Measuring success

  • Reduce the number of debugging-related tickets by 10%

  • Decrease the average time spent by the Rutter team on debugging and resolving issues

 
 

Current & desired workflow mapping

At this phase, I mapped out the current interactions, documenting what users did at each stage and how the Rutter team responded.

Current interactions

Current state

I then contrasted it with the desired future flow, incorporating Request Logs and envisioning the ideal process.

This exercise allowed me to consolidate learnings, ensure team alignment, and identify remaining gaps, such as:

  • How might users navigate to Request Logs? By connection or across all requests?

  • Does request recency matter to users?

  • Search/filtering options: Order? Prioritization? Nested or hierarchical structure?

User flow (future state)

Future flow

Our discussions led to the key decision to store log data at the top level, with Request Logs in the left menu bar for better discoverability and convenience.

The workflow maps also served as a foundation for subsequent design exploration, guiding discussions and informing potential solutions.

Design exploration

Next, I created mockups focusing on content, layout, and information flow. Working closely with the founder and engineer partners, we navigated various options and decisions.

Design exploration screenshot

Design exploration

Request Logs table view:

  • Column prioritization: Timestamp first (default sort), followed by Request ID, platform, method, path, status code

  • Visual enhancements: Color-coded methods, platform logos for easier differentiation

Filtering and lookup options:

  • Users may not always remember request IDs, prompting exploration of additional criteria (e.g. types, date ranges)

  • Combination filters: Platform, status code, time frames (e.g., all NetSuite requests in the last 3 days)
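A combination filter like "all NetSuite requests in the last 3 days" could be expressed as a simple conjunction of optional criteria. The sketch below is a hypothetical illustration of the filtering logic, not Rutter's actual query API; the platform names and field names are assumed.

```python
from datetime import datetime, timedelta, timezone


def filter_logs(logs, platform=None, status_code=None, since=None):
    """Keep only entries matching every provided criterion."""
    return [
        entry for entry in logs
        if (platform is None or entry["platform"] == platform)
        and (status_code is None or entry["status_code"] == status_code)
        and (since is None or entry["timestamp"] >= since)
    ]


now = datetime.now(timezone.utc)
logs = [
    {"platform": "NETSUITE", "status_code": 400, "timestamp": now - timedelta(days=1)},
    {"platform": "SHOPIFY", "status_code": 200, "timestamp": now - timedelta(days=2)},
    {"platform": "NETSUITE", "status_code": 200, "timestamp": now - timedelta(days=10)},
]

# "All NetSuite requests in the last 3 days"
recent_netsuite = filter_logs(logs, platform="NETSUITE",
                              since=now - timedelta(days=3))
```

Each criterion is optional, which mirrors the design goal: users who lack a request ID can still narrow down by whatever context they do remember.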

Request detailed view: Visualizing the chronological data flow and the relationships between intermediary steps presented a challenge. The flow involves four key stages:

  1. Customer -> Rutter: "Rutter Request"

  2. Rutter -> Platform: "Platform Request"

  3. Platform -> Rutter: "Platform Response"

  4. Rutter -> Customer: "Rutter Response"
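The four stages above have a fixed chronological order, so a detailed view can always sort a request's log entries the same way. A minimal sketch (the stage keys are illustrative assumptions, not Rutter's internal names):

```python
# The four chronological stages of a single logged request
STAGES = [
    "rutter_request",     # 1. Customer -> Rutter
    "platform_request",   # 2. Rutter -> Platform
    "platform_response",  # 3. Platform -> Rutter
    "rutter_response",    # 4. Rutter -> Customer
]


def order_stages(entries):
    """Sort a request's log entries into chronological stage order."""
    return sorted(entries, key=lambda e: STAGES.index(e["stage"]))
```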

I noticed terminology inconsistencies (e.g., "Rutter Request" vs. "Rutter Input") across the team, which could lead to user comprehension issues. To avoid introducing new terms, I suggested using simple visuals like arrow icons to show relationships.

Directions explored for visualizing log data:

  • Grouping requests into two, stacking vertically

  • Presenting requests side-by-side

  • Emphasizing chronological order of the 4 stages, using tabs

For each request, I introduced a summary section highlighting key information. I initially explored using tabs to separate the summary from the request/response details, but decided against tabs to reduce clicks and keep all relevant information on one page.

 
 

Card sorting and user testing

To validate assumptions and gather feedback, we conducted testing with 5 participants:

  • N = 5, 40-minute moderated sessions via Zoom

  • Participants: Companies with strong engineering teams and error-prone write operations, likely to benefit significantly from Request Logs.

My role: I designed and moderated the research sessions, guiding participants through card sorting exercises and scenario-based tasks. I also involved engineering partners to ensure a cross-functional perspective and expose them to first-hand user feedback.

Post-sessions, we debriefed as a team, discussing key findings and areas for further exploration. This collaborative approach allowed us to effectively synthesize insights and refine the design based on user feedback and technical constraints.

Card sorting activities with 5 participants

Card sorting

Interactive prototypes screenshot

Interactive prototypes for usability testing

Key findings

Card sorting:

  • While most information was deemed valuable for debugging, some elements like API versions were considered less crucial as they could often be found in request headers.

  • Participants categorized request information into 2 main types:

    • Identifiers like access token, connection ID, or request ID. Interestingly, request IDs were rarely mentioned unprompted, indicating they may not be the first aspect users consider when troubleshooting.

    • Information for debugging:

      • Error details stood out as most critical for participants

      • Detailed explanation of intermediate steps, including platform input/output

      • Some didn't value status codes, believing they are often mapped incorrectly

Usability testing:

  • Understanding Request Logs' purpose:

    • Participants understood it as a centralized location to view all requests sent to Rutter, covering all platforms

    • This centralized approach eliminates the need to sift through logs from multiple third-party APIs

  • How users discover errors:

    • Through customer feedback - a primary source of error detection

    • Using internal logging tools: while participants preferred their own tooling, Rutter's logs were seen as useful for customer-reported issues

  • Rethinking Request IDs' importance:

    • Many customers discover issues via their own customers and don't have request IDs on hand

    • Rather than using request IDs, customers rely on contextual information (platform, type, timestamp) to pinpoint problematic requests

    • Request IDs are still useful for direct communication with Rutter support or independently discovered errors

  • Filtering & search expectations:

    • Ability to filter by identifiers (connection ID, path, endpoint), preferring connection IDs over access tokens for security

    • The following filters were valuable for locating specific failed requests: timestamp, platform, status code, request type

    • Some wanted granular control over time ranges (seconds, minutes, hours)

  • Timestamp format preference:

    • ISO standard format preferred over human-readable for consistency and compatibility across systems

  • Request detailed view:

    • Layout perceived as intuitive, with platform response quadrant providing crucial info for troubleshooting

    • Information sufficient for debugging, including input/output between Rutter and platforms

    • Quick-view access was desired, preferably in a pop-out or accordion format
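The timestamp-format preference above is easy to illustrate: ISO 8601 strings sort lexicographically and parse consistently across systems, while human-readable formats are locale- and convention-dependent. The example values below are illustrative.

```python
from datetime import datetime, timezone

ts = datetime(2024, 3, 5, 14, 30, 0, tzinfo=timezone.utc)

iso = ts.isoformat()                          # ISO 8601: sortable, machine-friendly
human = ts.strftime("%b %d, %Y at %I:%M %p")  # human-readable, format-dependent

print(iso)    # 2024-03-05T14:30:00+00:00
print(human)  # Mar 05, 2024 at 02:30 PM
```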

User testing session Zoom screenshot

Final designs

Results and impacts

The implementation of the Request Logs feature has resulted in tangible improvements for users and the Rutter team:

  1. Enhanced problem-solving capabilities for customers.

  2. Improved ability for Rutter to debug complex API failures.

  3. Reduced customer uncertainty and frustration, as errors are no longer a black box.

Since its rollout, we've received repeated positive feedback from internal stakeholders and our customers' engineering teams regarding the enhanced debugging process. Additionally, we've observed a 16% decrease in support tickets related to requests.

Although not quantitatively tracked, our engineering team noticed a significant reduction in time spent digging into bugs and untangling complex issues, especially when handling POST requests.

Learnings

  • Working within tight deadlines: The timeline left little room for traditional processes like conducting user interviews during discovery. Instead, I heavily relied on existing information and past examples to inform decisions. While challenging, it pushed us to be resourceful and creative in finding solutions under significant time constraints.

  • Navigating a new domain: Working in the highly technical unified API domain was a significant learning experience. Despite initially lacking technical knowledge of debugging processes, I quickly immersed myself in learning. My lack of prior expertise allowed me to bring fresh perspectives and ask critical questions that helped uncover gaps in our understanding.

  • Balancing solution exploration with understanding customer needs: A key takeaway was the importance of understanding how customers discover and troubleshoot issues before reaching out to Rutter. It was eye-opening to realize that customers often identify problems through interactions with their own users before turning to Rutter. This insight underscored the need for deeper exploration into typical customer-end-user interactions, utilization of internal logging tools, and the timing/triggers for reaching out to Rutter for support.

  • Relationship between Request Logs and other areas: Various parts of the Rutter API can potentially break besides failed requests. One participant expressed a desire to see links between Webhooks, connections, and requests to better understand the interconnectedness of these components and streamline troubleshooting processes. Further exploration into the relationship between Request Logs and other sections like Connections and Webhooks presents an opportunity for improvement.