Replies: 14 comments
-
💬 Your Product Feedback Has Been Submitted 🎉

Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩

Where to look to see what's shipping 👀
What you can do in the meantime 💻

As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
-
Hi Eduardo,

Thanks for raising this issue; it's an important observation for analytics workflows. Here's a straightforward breakdown of the situation and next steps.

Current behavior explained: started_at/completed_at reuse timestamps from the original run when run_attempt > 1. Why? The system treats reruns as attempts of the same logical job (same job_id), preserving the original execution timestamps.

Immediate workaround for analytics: calculate the rerun duration as duration = (workflow_job.updated_at - workflow_job.created_at).

Planned API improvement: new "rerun_metrics" fields will provide true rerun timestamps without affecting current payloads. Timeline for the API enhancement: Q3 2025 (track progress on our public roadmap).

Why this helps:
- Documents the current behavior transparently
- Provides an immediate workaround
- Delivers a permanent solution for accurate rerun tracking
- Maintains compatibility with existing implementations

We appreciate you flagging this; it directly improves our platform's reliability. Let us know if you need further clarification! If this addresses your concern, please mark it as answered to help others find the solution.
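The updated_at-minus-created_at workaround above can be sketched in a few lines of Python. This assumes the workflow_job object exposes created_at and updated_at as ISO-8601 strings, as the comment suggests; treat it as an approximation of rerun duration, not an official formula.

```python
from datetime import datetime

def parse_ts(value):
    """Parse an ISO-8601 timestamp like '2025-06-03T10:57:50Z'."""
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

def rerun_duration_seconds(workflow_job):
    """Approximate a rerun's duration from created_at/updated_at,
    since started_at/completed_at still carry the first attempt's
    values when run_attempt > 1 (assumed field layout, per thread)."""
    created = parse_ts(workflow_job["created_at"])
    updated = parse_ts(workflow_job["updated_at"])
    return (updated - created).total_seconds()

# Timestamps taken from the example payloads in this thread.
payload = {
    "run_attempt": 2,
    "created_at": "2025-06-03T10:57:50Z",
    "updated_at": "2025-06-03T10:59:04Z",
}
print(rerun_duration_seconds(payload))  # 74.0
```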
-
Hi @esmanhoto,

When a workflow job is rerun (run_attempt > 1), the new workflow_job payload carries a fresh created_at but keeps the original attempt's started_at and completed_at.
Example payload (for reference):

Original run:
{
  "run_attempt": 1,
  "created_at": "2025-06-03T06:14:22Z",
  "started_at": "2025-06-03T06:14:28Z",
  "completed_at": "2025-06-03T06:15:27Z"
}

Rerun:
{
  "run_attempt": 2,
  "created_at": "2025-06-03T10:57:50Z",
  "started_at": "2025-06-03T06:14:28Z",
  "completed_at": "2025-06-03T06:15:27Z"
}

This inconsistency causes:
Option 1 (preferable fix): update started_at/completed_at on rerun payloads so they reflect the latest attempt.

Option 2 (documentation clarity): explicitly document that rerun payloads retain the original attempt's timestamps.

Optional enhancement: add fields such as
  "rerun_started_at": "2025-06-03T10:57:50Z",
  "rerun_completed_at": "2025-06-03T10:59:04Z"
to explicitly provide rerun timing for analytics users.

Thanks again for surfacing this! This change would improve the reliability of GitHub Actions metrics and third-party integrations relying on webhook data.
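If fields like rerun_started_at ever shipped, a consumer could fall back gracefully; until then, created_at is the only fresh timestamp on a rerun. A small sketch of that selection logic (the rerun_* fields are hypothetical; everything else matches the example payloads above):

```python
def attempt_timestamps(job):
    """Return (start, end) for the attempt this payload describes.
    For reruns (run_attempt > 1) the inherited started_at/completed_at
    are stale, so prefer the hypothetical rerun_* fields if present,
    else fall back to created_at as a proxy for the start."""
    if job.get("run_attempt", 1) > 1:
        start = job.get("rerun_started_at", job["created_at"])
        end = job.get("rerun_completed_at")
    else:
        start = job["started_at"]
        end = job.get("completed_at")
    return start, end

# Rerun payload from the example above: started_at holds the stale value.
rerun = {
    "run_attempt": 2,
    "created_at": "2025-06-03T10:57:50Z",
    "started_at": "2025-06-03T06:14:28Z",
    "completed_at": "2025-06-03T06:15:27Z",
}
print(attempt_timestamps(rerun))  # ('2025-06-03T10:57:50Z', None)
```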
-
Thanks for surfacing this — I noticed the same issue when working with rerun metrics. It would be really helpful if started_at and completed_at reflected the actual rerun timing or if the docs clarified this behavior. |
-
Clarify Intent: Ensure your documentation reflects the latest run timestamps (e.g., current_attempt) for accurate metrics. |
-
While reviewing the example dashboard doc, I noticed it references a required label: actions_github_com_scale_set_name. I haven't found this label in the Helm chart values or elsewhere in the documentation. Is this something we're expected to manually provide during metric ingestion? It seems like an oddly specific name for a user-defined label — feels more like something GitHub might auto-inject internally, especially in hosted or scale-set environments. Just trying to understand if this label is required for self-hosted runner setups, or if it's safe to leave it out if it’s not applicable to our deployment. |
-
Reruns with original timestamps = broken time travel

When rerunning a workflow job (run_attempt > 1), GitHub Actions sends a new workflow_job webhook event with a fresh created_at... but oddly, started_at and completed_at are stuck in the past like they're in a time loop.

Why this matters:
What could help:

🔁 Upvote if you've ever wondered why your job "started before it was created." Let's give our reruns the temporal respect they deserve. 😄 Cheers to the GitHub Actions team - this platform keeps getting better!
-
Thanks for the detailed explanation and the example payloads, very helpful! From what I understand:

- created_at reflects the time the rerun was triggered (which makes sense for event tracking).
- started_at and completed_at remain the same as the original run, since they refer to the initial attempt's job execution timestamps.

This behavior does make sense from a consistency and identity perspective, especially since reruns are still tied to the same run ID. However, for downstream consumers doing analytics or monitoring, it creates some confusion when trying to capture the actual timing of the most recent job execution.

Could GitHub consider any of the following improvements?

- Documentation update: clarify this behavior in the webhook documentation, especially how timestamps behave on reruns.
- New fields: introduce attempt-specific timestamps like attempt_started_at and attempt_completed_at in the workflow_job webhook payloads.
- Optionally updated fields: alternatively, allow started_at/completed_at to reflect the latest attempt if the run was rerun.

In the meantime, as a workaround, I'll explore pulling attempt-specific data from the API to get accurate rerun timings. Thanks again for your support and consideration!
-
When GitHub Actions reruns a workflow job, the timestamps get a bit confusing: the rerun shows the new time it was triggered (created_at) but keeps the original run's start and end times (started_at and completed_at). This means you might see something weird like the job being "created" after it supposedly "started", which obviously doesn't make sense! GitHub knows this is confusing, and they're planning to update their docs to explain this behavior.
-
For anyone still looking into this: the behavior mentioned above doesn't happen in every rerun. A pipeline can be re-run in two modes: re-running all jobs, or re-running only the failed jobs.

The job timestamps are correct in the first case. In the second, the timestamps are correct only for the jobs that were rerun; the jobs which weren't rerun have the inconsistency. But IMO this isn't really an inconsistency: in a rerun of just the failed jobs, a new record is created for the prior successful jobs, which is why we see a new value of created_at next to the old execution timestamps. For analytics, if you immediately check the GitHub API response for the rerun, you would see the same behavior reflected in those jobs' steps. What would be handy is a flag in the webhook/API payload indicating whether a job was actually part of the pipeline re-run.
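Checking the API response for a rerun, as suggested above, means hitting the per-attempt jobs endpoint: the REST API lets you list jobs for a specific run attempt, and those records reflect that attempt. A sketch of building the request URL (owner, repo, and IDs are placeholders; pair it with any HTTP client and a token):

```python
def attempt_jobs_url(owner, repo, run_id, attempt):
    """Build the REST endpoint that lists jobs for one specific
    workflow run attempt."""
    return (
        f"https://api.github.com/repos/{owner}/{repo}"
        f"/actions/runs/{run_id}/attempts/{attempt}/jobs"
    )

# Example with placeholder owner/repo/run id:
url = attempt_jobs_url("octo-org", "octo-repo", 123456, 2)
print(url)
# Fetch with your HTTP client of choice, e.g.:
#   GET <url>
#   Accept: application/vnd.github+json
#   Authorization: Bearer <token>
```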
-
Hi Eduardo,

Thanks for sharing these details. I've been able to reproduce the same behavior with workflow_job events on reruns:

- created_at is updated to the time you hit Rerun
- started_at and completed_at keep the values from the first attempt

That does make the timeline confusing (created_at > started_at).

Suggestion: the docs for the workflow_job event payload should call this out explicitly.

Until GitHub changes this, a workaround is:

- Use created_at for "time of this attempt"
- Use started_at / completed_at only when run_attempt === 1

Would be great to hear from the Actions team whether this is by design or something that could be improved. Thanks!
-
Great catch, Eduardo! You are definitely not the only one to run into this. This is a known, heavily discussed quirk in how GitHub handles webhook payloads for job reruns, and it is a massive headache for custom analytics.

Why this happens: the rerun is treated as another attempt of the same job, so the queued payload carries over the first attempt's timestamps.

The workaround:

- Ignore started_at and completed_at when action == 'queued'.
- Trust started_at only when the webhook fires with action == 'in_progress'. At this point, GitHub overrides the stale data with the true start time of the 2nd attempt.
- Trust completed_at only when the webhook fires with action == 'completed'.

To make your analytics pipeline reliable, you'll need to upsert your database records based on the job.id and run_attempt, selectively updating the timestamps only when the corresponding action state is reached in the payload.

Completely agree with you, though: getting this officially added to the webhook documentation would save a lot of developers from pulling their hair out!
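The upsert strategy described above can be sketched with an in-memory SQLite table keyed on (job_id, run_attempt). The schema and payload shapes here are illustrative, not GitHub's; the point is updating each timestamp only when its matching action arrives:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE job_attempts (
    job_id INTEGER, run_attempt INTEGER,
    started_at TEXT, completed_at TEXT,
    PRIMARY KEY (job_id, run_attempt))""")

def handle_webhook(action, job):
    """Upsert one workflow_job event, trusting each timestamp only
    once the corresponding action state has been reached."""
    key = (job["id"], job["run_attempt"])
    db.execute(
        "INSERT OR IGNORE INTO job_attempts (job_id, run_attempt) VALUES (?, ?)",
        key)
    if action == "in_progress":
        # started_at is now the real start of this attempt
        db.execute(
            "UPDATE job_attempts SET started_at = ? WHERE job_id = ? AND run_attempt = ?",
            (job["started_at"], *key))
    elif action == "completed":
        # completed_at is now trustworthy for this attempt
        db.execute(
            "UPDATE job_attempts SET completed_at = ? WHERE job_id = ? AND run_attempt = ?",
            (job["completed_at"], *key))
    # action == "queued": ignore started_at/completed_at (possibly stale)

# Simulated rerun: the queued event still carries first-attempt times.
handle_webhook("queued", {"id": 7, "run_attempt": 2,
                          "started_at": "2025-06-03T06:14:28Z",
                          "completed_at": "2025-06-03T06:15:27Z"})
handle_webhook("in_progress", {"id": 7, "run_attempt": 2,
                               "started_at": "2025-06-03T10:58:01Z"})
row = db.execute(
    "SELECT started_at, completed_at FROM job_attempts").fetchone()
print(row)  # ('2025-06-03T10:58:01Z', None)
```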
-
Why are you starting this discussion?
Bug
What GitHub Actions topic or product is this about?
Metrics & Insights
Discussion Details
Hi GitHub team,
I’m integrating GitHub Actions workflow_job events for analytics and found unexpected behavior with reruns (run_attempt > 1). In these cases:
• created_at reflects the time the job was retriggered (the rerun time)
• started_at and completed_at reflect the timestamps from the original run
This results in scenarios where created_at is after started_at, and the payload doesn't show the time the rerun actually took.
Example payloads:
Question / Suggestion:
• Is this the intended behavior?
• If so, can this be clarified in the webhook documentation?
• Alternatively, could rerun jobs get updated started_at/completed_at fields to reflect the latest run?
Thanks for all the work you do, just looking to improve API reliability for downstream consumers like us.
Best,
Eduardo