New line delimited json file upload to bigquery error Code:3
jobStatus: {
  errorResult: {
    code: 3
    message: "Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1127; errors: 1. Please look into the errors[] collection for more details."
  }
  errors: [
    0: {
      code: 3
      message: "Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1127; errors: 1. Please look into the errors[] collection for more details."
    }
    1: {
      code: 3
      message: "Error while reading data, error message: JSON processing encountered too many errors, giving up. Rows: 1127; errors: 1; max bad: 0; error percent: 0"
    }
    2: {
      code: 3
      message: "Error while reading data, error message: JSON parsing error in row starting at position 132601653: Parser terminated before end of string"
    }
  ]
}
jobState: "DONE"

This is line 1127 of the .json file:

{"id":"xxx","ContextId":"xxx","dueAt":"2023-01-26T05:59:59.000Z","title":"1/25 MATH: *LOCATION CHANGE* Live Session @ 9:40 -  ZEARN M4 L 20","aos":null,"aos_user":null,"aos_ass":null,"aos_work":null,"aos_assId":null,"c_id":"xxx","c_accountId":"xxx","c_enrollmentTermId":"xxx","c_name":"MATH 5 - xxx - 02","c_courseWorkflowState":"available","e_id":"xxx","e_courseId":"xxx","e_courseSectionId":"xxx","cs_sisSourceId":"xxx~xxx","u_id":"xxx","u_name":"xxx xxx","edd_assignmentId":null,"edd_userId":null,"edd_effectiveDueDate":null,"et_id":"xxx","et_name":"2022/2023 S1 S2","et_startAt":"2022-08-04T05:01:00.000Z","et_endAt":"2023-05-23T04:59:00.000Z","et_valueGradingPeriodGroupId":"xxx","acc_id":"xxx","acc_name":"Elementary Math","acc_parentAccount":"xxx","really_dueAT":"2023-01-26T05:59:59Z","YearWeek":"202304","s_id":"xxx","s_excused":"false","s_assignmentId":"xxx","s_gradingPeriodId":"xxx","s_submission_type":"basic_lti_launch","s_workflowState":"graded","s_attempt":"1"}

Column names and data were altered to protect sensitive information, but the structure remains intact. Any insight would be greatly appreciated.

I have many other jobs that fail in the same way. This file is quite large (1.5 GB), but it's not a timeout error, so I am lost.
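One way to narrow this down locally (a sketch, not official BigQuery tooling): scan the file line by line with a strict JSON parser, tracking the byte offset of each line. The "position 132601653" in the error appears to be a byte offset into the file, so a matching offset points at the exact record BigQuery choked on (often an unescaped quote, an embedded raw newline, or a truncated final line). The helper name `find_bad_lines` is hypothetical, just for illustration.

```python
import json

def find_bad_lines(path, max_report=5):
    """Scan a newline-delimited JSON file and report lines that fail to parse.

    Returns a list of (line_number, byte_offset, error_message) tuples, so the
    byte offset can be compared against the position BigQuery reports.
    """
    bad = []
    offset = 0
    with open(path, "rb") as f:
        for lineno, raw in enumerate(f, start=1):
            try:
                json.loads(raw)
            except json.JSONDecodeError as exc:
                bad.append((lineno, offset, str(exc)))
                if len(bad) >= max_report:
                    break
            offset += len(raw)
    return bad
```

If this reports nothing, the record boundary itself may be the problem (e.g. a value containing an unescaped literal newline splits one logical record across two physical lines), which a line-oriented scan like this would surface as two consecutive bad lines.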

Answers

No answers yet




Related questions
Bigquery table join with specific conditions

I have this table A which contains

  ID | Start Date | End Date
  ---|------------|-----------
  1  | 2020-03-01 | 2020-03-02
     | 2020-05-01 | 2020-05-02
  2  | 2020-06-01 | 2020-06-02
  ...

SQL IN operator in Bigquery

Suppose I have two tables of devices, table 1 and table 2. I need to get all the models that have a band (in the table). Looking for ways to execute this in BigQuery SQL. Thanks for the answer. Table 2 ...


I am trying to split my Date column into 2 separate columns (DATE & TIME). Currently, the date column has the date with a time stamp, and I need to drop the time stamp or put it into another ...

Does Google BigQuery require a schema? [closed]

I want to use bigquery for storing logs. Does it require a fixed schema like Mysql and other RDBMS or it is like nosql where there is no schema?

How to Analyze and Query big chunks of data

I need to: 1. Analyze big files of http logs. I'm thinking of using MapReduce but I'm not sure where to host it. Shall I use App Engine Mapper, EC2+MapReduce, or simply run it on my VPS? Other ...

MapReduce in the cloud

Except for Amazon MapReduce, what other options do I have to process a large amount of data?
