19709980
CustomerApi.Jobs.GenerateAccountMetrics
Queue
clickhouse_account_metrics
Attempt
6 of 10
Priority
0
Tags
...
Node
customer_api@10.10.1.109
Queue Time
00:00.233
Run Time
01:07.047
Inserted
17h ago
Scheduled
15h ago
Completed
15h ago (01:08)
Cancelled
—
Discarded
—
Args
%{
"account_id" => "46137",
"date" => "2026-03-05",
"query" => "users_reached",
"window_days" => 30
}
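A job with these args would be enqueued through the standard Oban API. The sketch below is an assumption reconstructed from the record above (worker module, queue, and `6 of 10` attempts are from the page; the insert-site code itself is hypothetical):

```elixir
# Hypothetical enqueue call for the job shown above.
# Worker module, args, queue, and max_attempts come from the job record;
# the call site is an assumption, not the application's actual code.
%{
  "account_id" => "46137",
  "date" => "2026-03-05",
  "query" => "users_reached",
  "window_days" => 30
}
|> CustomerApi.Jobs.GenerateAccountMetrics.new(
  queue: :clickhouse_account_metrics,
  max_attempts: 10
)
|> Oban.insert()
```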
Meta
%{
"deps" => ["generate_event_counts"],
"name" => "generate_users_reached",
"on_hold" => false,
"orig_scheduled_at" => 1772673538387508,
"recorded" => true,
"return" => "g1AAAABXeJwrYWBgYC7nLS1OLSqOL0pNTM5ITUliYBA5VS6dVJqZk5JaFJ9alppXEo+ugjG9XCKvoDi+uLSoLLUSVTqRAQAeux8j",
"structured" => true,
"workflow" => true,
"workflow_id" => "019cbb59-1856-7db5-b592-c75a4c7b8548"
}
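The meta shows this job runs as step `generate_users_reached` of an Oban Pro workflow, gated on `generate_event_counts` via `deps`. A minimal sketch of the workflow shape that would produce this meta, assuming the Oban Pro Workflow API (the sibling worker module and args variable are placeholders):

```elixir
# Hedged reconstruction of the workflow implied by the meta:
#   "deps" => ["generate_event_counts"], "name" => "generate_users_reached".
# EventCounts is a placeholder for the upstream worker; exact module paths
# and the Workflow API shape depend on the Oban Pro version in use.
alias Oban.Pro.Workflow

Workflow.new()
|> Workflow.add(:generate_event_counts, CustomerApi.Jobs.EventCounts.new(args))
|> Workflow.add(
  :generate_users_reached,
  CustomerApi.Jobs.GenerateAccountMetrics.new(args),
  deps: [:generate_event_counts]
)
|> Oban.insert_all()
```

Because the meta also has `"recorded" => true`, the step's return value (the base64 `"return"` blob above, shown decoded under Recorded Output) is persisted for downstream steps to read.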
Recorded Output
%{
users_reached: 5322,
builder_event_users_reached: 359,
nps_survey_users_reached: 0
}
Errors
Attempt 5—15h ago
** (FunctionClauseError) no function clause matching in Ch.RowBinary.decode_names/4
The following arguments were given to Ch.RowBinary.decode_names/4:
# 1
"ial build))\n"
# 2
59
# 3
67
# 4
["ct(pool: PrefetchedReadPool, algorithm: Thread). (MEMORY_LIMIT_EXCEEDED) (version 25.10.1.7375 (offic", "3WithKeeperDisk of type s3, from mark 31839 with max_rows_to_read = 4451, offset = 0): While executing MergeTreeSel", "0ce73-d792-4409-8626-e5e307da7fa9) located on disk ", "prod_events.event_history_v2 (5f", "ading from part data/5f30ce73-d792-4409-8626-e5e307da7fa9/all_16465092_16481137_11_22488012/ in table", "ision: Query was selected to stop by OvercommitTracker: (while reading column attributes): (while r", "iB), current RSS: 319.01 GiB, maximum: 320.40 GiB. OvercommitTracker de", "de: 241. DB::Exception: (total) memory limit exceeded: would use 321.01 GiB (attempt to allocate chunk of 2.00 "]
(ch 0.7.1) lib/ch/row_binary.ex:789: Ch.RowBinary.decode_names/4
(ch 0.7.1) lib/ch/query.ex:290: DBConnection.Query.Ch.Query.decode/3
(db_connection 2.9.0) lib/db_connection.ex:1470: DBConnection.decode/4
(db_connection 2.9.0) lib/db_connection.ex:849: DBConnection.execute/4
(ch 0.7.1) lib/ch.ex:94: Ch.query/4
(ecto_sql 3.13.4) lib/ecto/adapters/sql.ex:620: Ecto.Adapters.SQL.query!/4
(ecto_ch 0.8.6) lib/ecto/adapters/clickhouse.ex:323: Ecto.Adapters.ClickHouse.execute/5
(ecto 3.13.5) lib/ecto/repo/queryable.ex:241: Ecto.Repo.Queryable.execute/4
Attempt 4—15h ago
** (FunctionClauseError) no function clause matching in Ch.RowBinary.decode_names/4
The following arguments were given to Ch.RowBinary.decode_names/4:
# 1
"chedReadPool, algorithm: Thread). (MEMORY_LIMIT_EXCEEDED) (version 25.10.1.7375 (official build))\n"
# 2
61
# 3
67
# 4
["rom mark 24991 with max_rows_to_read = 5159, offset = 0): While executing MergeTreeSelect(pool: Prefet", "rod_events.event_history_v2 (5f30ce73-d792-4409-8626-e5e307da7fa9) located on disk s3WithKeeperDisk of type s3, ", "ading from part data/5f30ce73-d792-4409-8626-e5e307da7fa9/all_15196997_15207370_9_22488012/ in table ", "ision: Query was selected to stop by OvercommitTracker: (while reading column attributes): (while r", "iB), current RSS: 330.78 GiB, maximum: 320.40 GiB. OvercommitTracker de", "de: 241. DB::Exception: (total) memory limit exceeded: would use 334.78 GiB (attempt to allocate chunk of 4.00 "]
(ch 0.7.1) lib/ch/row_binary.ex:789: Ch.RowBinary.decode_names/4
(ch 0.7.1) lib/ch/query.ex:290: DBConnection.Query.Ch.Query.decode/3
(db_connection 2.9.0) lib/db_connection.ex:1470: DBConnection.decode/4
(db_connection 2.9.0) lib/db_connection.ex:849: DBConnection.execute/4
(ch 0.7.1) lib/ch.ex:94: Ch.query/4
(ecto_sql 3.13.4) lib/ecto/adapters/sql.ex:620: Ecto.Adapters.SQL.query!/4
(ecto_ch 0.8.6) lib/ecto/adapters/clickhouse.ex:323: Ecto.Adapters.ClickHouse.execute/5
(ecto 3.13.5) lib/ecto/repo/queryable.ex:241: Ecto.Repo.Queryable.execute/4
Attempt 3—15h ago
** (FunctionClauseError) no function clause matching in Ch.RowBinary.decode_names/4
The following arguments were given to Ch.RowBinary.decode_names/4:
# 1
"ial build))\n"
# 2
59
# 3
67
# 4
["ct(pool: PrefetchedReadPool, algorithm: Thread). (MEMORY_LIMIT_EXCEEDED) (version 25.10.1.7375 (offic", "3WithKeeperDisk of type s3, from mark 36468 with max_rows_to_read = 5232, offset = 0): While executing MergeTreeSel", "f30ce73-d792-4409-8626-e5e307da7fa9) located on disk ", "792-4409-8626-e5e307da7fa9/all_15637351_15649468_9_22488012/ in table prod_events.event_history_v2 (", "reading from part data/5f30ce73-", "ision: Waiting timeout for memory to be freed is reached: (while reading column attributes): (while", "iB), current RSS: 274.67 GiB, maximum: 320.40 GiB. OvercommitTracker de", "de: 241. DB::Exception: (total) memory limit exceeded: would use 334.15 GiB (attempt to allocate chunk of 4.00 "]
(ch 0.7.1) lib/ch/row_binary.ex:789: Ch.RowBinary.decode_names/4
(ch 0.7.1) lib/ch/query.ex:290: DBConnection.Query.Ch.Query.decode/3
(db_connection 2.9.0) lib/db_connection.ex:1470: DBConnection.decode/4
(db_connection 2.9.0) lib/db_connection.ex:849: DBConnection.execute/4
(ch 0.7.1) lib/ch.ex:94: Ch.query/4
(ecto_sql 3.13.4) lib/ecto/adapters/sql.ex:620: Ecto.Adapters.SQL.query!/4
(ecto_ch 0.8.6) lib/ecto/adapters/clickhouse.ex:323: Ecto.Adapters.ClickHouse.execute/5
(ecto 3.13.5) lib/ecto/repo/queryable.ex:241: Ecto.Repo.Queryable.execute/4
Attempt 2—15h ago
** (FunctionClauseError) no function clause matching in Ch.RowBinary.decode_names/4
The following arguments were given to Ch.RowBinary.decode_names/4:
# 1
"chedReadPool, algorithm: Thread). (MEMORY_LIMIT_EXCEEDED) (version 25.10.1.7375 (official build))\n"
# 2
61
# 3
67
# 4
["rom mark 32577 with max_rows_to_read = 4679, offset = 0): While executing MergeTreeSelect(pool: Prefet", "rod_events.event_history_v2 (5f30ce73-d792-4409-8626-e5e307da7fa9) located on disk s3WithKeeperDisk of type s3, ", "ading from part data/5f30ce73-d792-4409-8626-e5e307da7fa9/all_16157678_16172147_9_22488012/ in table ", "ision: Query was selected to stop by OvercommitTracker: (while reading column attributes): (while r", "iB), current RSS: 345.42 GiB, maximum: 320.40 GiB. OvercommitTracker de", "de: 241. DB::Exception: (total) memory limit exceeded: would use 347.42 GiB (attempt to allocate chunk of 2.00 "]
(ch 0.7.1) lib/ch/row_binary.ex:789: Ch.RowBinary.decode_names/4
(ch 0.7.1) lib/ch/query.ex:290: DBConnection.Query.Ch.Query.decode/3
(db_connection 2.9.0) lib/db_connection.ex:1470: DBConnection.decode/4
(db_connection 2.9.0) lib/db_connection.ex:849: DBConnection.execute/4
(ch 0.7.1) lib/ch.ex:94: Ch.query/4
(ecto_sql 3.13.4) lib/ecto/adapters/sql.ex:620: Ecto.Adapters.SQL.query!/4
(ecto_ch 0.8.6) lib/ecto/adapters/clickhouse.ex:323: Ecto.Adapters.ClickHouse.execute/5
(ecto 3.13.5) lib/ecto/repo/queryable.ex:241: Ecto.Repo.Queryable.execute/4
Attempt 1—15h ago
** (FunctionClauseError) no function clause matching in Ch.RowBinary.decode_names/4
The following arguments were given to Ch.RowBinary.decode_names/4:
# 1
"ial build))\n"
# 2
59
# 3
67
# 4
["ct(pool: PrefetchedReadPool, algorithm: Thread). (MEMORY_LIMIT_EXCEEDED) (version 25.10.1.7375 (offic", "3WithKeeperDisk of type s3, from mark 34097 with max_rows_to_read = 6174, offset = 0): While executing MergeTreeSel", "0ce73-d792-4409-8626-e5e307da7fa9) located on disk ", "prod_events.event_history_v2 (5f", "ading from part data/5f30ce73-d792-4409-8626-e5e307da7fa9/all_14394994_14407841_27_22488012/ in table", "commitTracker decision: Memory overcommit has not freed enough memory: (while reading column attributes): (while r", "2 MiB), current RSS: 128.46 GiB, maximum: 320.40 GiB. Ove", "de: 241. DB::Exception: (total) memory limit exceeded: would use 338.30 GiB (attempt to allocate chunk of 1023."]
(ch 0.7.1) lib/ch/row_binary.ex:789: Ch.RowBinary.decode_names/4
(ch 0.7.1) lib/ch/query.ex:290: DBConnection.Query.Ch.Query.decode/3
(db_connection 2.9.0) lib/db_connection.ex:1470: DBConnection.decode/4
(db_connection 2.9.0) lib/db_connection.ex:849: DBConnection.execute/4
(ch 0.7.1) lib/ch.ex:94: Ch.query/4
(ecto_sql 3.13.4) lib/ecto/adapters/sql.ex:620: Ecto.Adapters.SQL.query!/4
(ecto_ch 0.8.6) lib/ecto/adapters/clickhouse.ex:323: Ecto.Adapters.ClickHouse.execute/5
(ecto 3.13.5) lib/ecto/repo/queryable.ex:241: Ecto.Repo.Queryable.execute/4
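Note that the `FunctionClauseError` in `Ch.RowBinary.decode_names/4` is a decoding symptom, not the root cause: every attempt's argument #4 contains fragments of a ClickHouse `MEMORY_LIMIT_EXCEEDED` (code 241) error. When ClickHouse fails mid-query over HTTP it appends its plain-text error to the already-streaming RowBinary response, and the decoder chokes on that text. One direction to explore is capping the query's memory and letting aggregation spill to disk via per-query settings, passed through the same `Ch.query/4` seen in the stack trace. The connection options, SQL, and thresholds below are illustrative assumptions, not values from this system:

```elixir
# Hedged sketch: pass per-query ClickHouse settings through the ch client.
# Hostname, SQL, and the numeric limits are placeholders for illustration.
{:ok, conn} = Ch.start_link(scheme: "http", hostname: "clickhouse.internal", port: 8123)

Ch.query(
  conn,
  "SELECT count(DISTINCT user_id) FROM prod_events.event_history_v2",
  [],
  settings: [
    # keep this query well under the node's ~320 GiB ceiling seen in the errors
    max_memory_usage: 100_000_000_000,
    # allow GROUP BY to spill to disk instead of failing
    max_bytes_before_external_group_by: 50_000_000_000,
    # fewer parallel streams -> lower peak RSS
    max_threads: 8
  ]
)
```

Narrowing the scan (shorter date ranges than the 30-day `window_days`, or pre-aggregated tables) would reduce memory pressure more durably than retrying, which the five identical failures above suggest.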