())
.setEstimatedTotalBytesScanned(452788190)
+ .setEstimatedRowCount(-1745583577)
.setTraceId("traceId-1067401920")
.build();
mockBigQueryRead.addResponse(expectedResponse);
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/AppendRowsRequest.java b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/AppendRowsRequest.java
index abc2eb3328..288b9e9058 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/AppendRowsRequest.java
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/AppendRowsRequest.java
@@ -1437,10 +1437,10 @@ public RowsCase getRowsCase() {
*
*
*
- * Required. The write_stream identifies the target of the append operation, and only
- * needs to be specified as part of the first request on the gRPC connection.
- * If provided for subsequent requests, it must match the value of the first
- * request.
+ * Required. The write_stream identifies the target of the append operation,
+ * and only needs to be specified as part of the first request on the gRPC
+ * connection. If provided for subsequent requests, it must match the value of
+ * the first request.
* For explicitly created write streams, the format is:
* * `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
* For the special default stream, the format is:
@@ -1469,10 +1469,10 @@ public java.lang.String getWriteStream() {
*
*
*
- * Required. The write_stream identifies the target of the append operation, and only
- * needs to be specified as part of the first request on the gRPC connection.
- * If provided for subsequent requests, it must match the value of the first
- * request.
+ * Required. The write_stream identifies the target of the append operation,
+ * and only needs to be specified as part of the first request on the gRPC
+ * connection. If provided for subsequent requests, it must match the value of
+ * the first request.
* For explicitly created write streams, the format is:
* * `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
* For the special default stream, the format is:
@@ -2506,10 +2506,10 @@ public Builder clearRows() {
*
*
*
- * Required. The write_stream identifies the target of the append operation, and only
- * needs to be specified as part of the first request on the gRPC connection.
- * If provided for subsequent requests, it must match the value of the first
- * request.
+ * Required. The write_stream identifies the target of the append operation,
+ * and only needs to be specified as part of the first request on the gRPC
+ * connection. If provided for subsequent requests, it must match the value of
+ * the first request.
* For explicitly created write streams, the format is:
* * `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
* For the special default stream, the format is:
@@ -2537,10 +2537,10 @@ public java.lang.String getWriteStream() {
*
*
*
- * Required. The write_stream identifies the target of the append operation, and only
- * needs to be specified as part of the first request on the gRPC connection.
- * If provided for subsequent requests, it must match the value of the first
- * request.
+ * Required. The write_stream identifies the target of the append operation,
+ * and only needs to be specified as part of the first request on the gRPC
+ * connection. If provided for subsequent requests, it must match the value of
+ * the first request.
* For explicitly created write streams, the format is:
* * `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
* For the special default stream, the format is:
@@ -2568,10 +2568,10 @@ public com.google.protobuf.ByteString getWriteStreamBytes() {
*
*
*
- * Required. The write_stream identifies the target of the append operation, and only
- * needs to be specified as part of the first request on the gRPC connection.
- * If provided for subsequent requests, it must match the value of the first
- * request.
+ * Required. The write_stream identifies the target of the append operation,
+ * and only needs to be specified as part of the first request on the gRPC
+ * connection. If provided for subsequent requests, it must match the value of
+ * the first request.
* For explicitly created write streams, the format is:
* * `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
* For the special default stream, the format is:
@@ -2598,10 +2598,10 @@ public Builder setWriteStream(java.lang.String value) {
*
*
*
- * Required. The write_stream identifies the target of the append operation, and only
- * needs to be specified as part of the first request on the gRPC connection.
- * If provided for subsequent requests, it must match the value of the first
- * request.
+ * Required. The write_stream identifies the target of the append operation,
+ * and only needs to be specified as part of the first request on the gRPC
+ * connection. If provided for subsequent requests, it must match the value of
+ * the first request.
* For explicitly created write streams, the format is:
* * `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
* For the special default stream, the format is:
@@ -2624,10 +2624,10 @@ public Builder clearWriteStream() {
*
*
*
- * Required. The write_stream identifies the target of the append operation, and only
- * needs to be specified as part of the first request on the gRPC connection.
- * If provided for subsequent requests, it must match the value of the first
- * request.
+ * Required. The write_stream identifies the target of the append operation,
+ * and only needs to be specified as part of the first request on the gRPC
+ * connection. If provided for subsequent requests, it must match the value of
+ * the first request.
* For explicitly created write streams, the format is:
* * `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
* For the special default stream, the format is:
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/AppendRowsRequestOrBuilder.java b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/AppendRowsRequestOrBuilder.java
index 88a471f15c..1c0b44754f 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/AppendRowsRequestOrBuilder.java
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/AppendRowsRequestOrBuilder.java
@@ -27,10 +27,10 @@ public interface AppendRowsRequestOrBuilder
*
*
*
- * Required. The write_stream identifies the target of the append operation, and only
- * needs to be specified as part of the first request on the gRPC connection.
- * If provided for subsequent requests, it must match the value of the first
- * request.
+ * Required. The write_stream identifies the target of the append operation,
+ * and only needs to be specified as part of the first request on the gRPC
+ * connection. If provided for subsequent requests, it must match the value of
+ * the first request.
* For explicitly created write streams, the format is:
* * `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
* For the special default stream, the format is:
@@ -48,10 +48,10 @@ public interface AppendRowsRequestOrBuilder
*
*
*
- * Required. The write_stream identifies the target of the append operation, and only
- * needs to be specified as part of the first request on the gRPC connection.
- * If provided for subsequent requests, it must match the value of the first
- * request.
+ * Required. The write_stream identifies the target of the append operation,
+ * and only needs to be specified as part of the first request on the gRPC
+ * connection. If provided for subsequent requests, it must match the value of
+ * the first request.
* For explicitly created write streams, the format is:
* * `projects/{project}/datasets/{dataset}/tables/{table}/streams/{id}`
* For the special default stream, the format is:
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/BatchCommitWriteStreamsRequest.java b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/BatchCommitWriteStreamsRequest.java
index 2fca0dbbd8..ce75b039eb 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/BatchCommitWriteStreamsRequest.java
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/BatchCommitWriteStreamsRequest.java
@@ -75,8 +75,8 @@ public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() {
*
*
*
- * Required. Parent table that all the streams should belong to, in the form of
- * `projects/{project}/datasets/{dataset}/tables/{table}`.
+ * Required. Parent table that all the streams should belong to, in the form
+ * of `projects/{project}/datasets/{dataset}/tables/{table}`.
*
*
*
@@ -101,8 +101,8 @@ public java.lang.String getParent() {
*
*
*
- * Required. Parent table that all the streams should belong to, in the form of
- * `projects/{project}/datasets/{dataset}/tables/{table}`.
+ * Required. Parent table that all the streams should belong to, in the form
+ * of `projects/{project}/datasets/{dataset}/tables/{table}`.
*
*
*
@@ -571,8 +571,8 @@ public Builder mergeFrom(
*
*
*
- * Required. Parent table that all the streams should belong to, in the form of
- * `projects/{project}/datasets/{dataset}/tables/{table}`.
+ * Required. Parent table that all the streams should belong to, in the form
+ * of `projects/{project}/datasets/{dataset}/tables/{table}`.
*
*
*
@@ -596,8 +596,8 @@ public java.lang.String getParent() {
*
*
*
- * Required. Parent table that all the streams should belong to, in the form of
- * `projects/{project}/datasets/{dataset}/tables/{table}`.
+ * Required. Parent table that all the streams should belong to, in the form
+ * of `projects/{project}/datasets/{dataset}/tables/{table}`.
*
*
*
@@ -621,8 +621,8 @@ public com.google.protobuf.ByteString getParentBytes() {
*
*
*
- * Required. Parent table that all the streams should belong to, in the form of
- * `projects/{project}/datasets/{dataset}/tables/{table}`.
+ * Required. Parent table that all the streams should belong to, in the form
+ * of `projects/{project}/datasets/{dataset}/tables/{table}`.
*
*
*
@@ -645,8 +645,8 @@ public Builder setParent(java.lang.String value) {
*
*
*
- * Required. Parent table that all the streams should belong to, in the form of
- * `projects/{project}/datasets/{dataset}/tables/{table}`.
+ * Required. Parent table that all the streams should belong to, in the form
+ * of `projects/{project}/datasets/{dataset}/tables/{table}`.
*
*
*
@@ -665,8 +665,8 @@ public Builder clearParent() {
*
*
*
- * Required. Parent table that all the streams should belong to, in the form of
- * `projects/{project}/datasets/{dataset}/tables/{table}`.
+ * Required. Parent table that all the streams should belong to, in the form
+ * of `projects/{project}/datasets/{dataset}/tables/{table}`.
*
*
*
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/BatchCommitWriteStreamsRequestOrBuilder.java b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/BatchCommitWriteStreamsRequestOrBuilder.java
index a8408f4dac..101831d8ab 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/BatchCommitWriteStreamsRequestOrBuilder.java
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/BatchCommitWriteStreamsRequestOrBuilder.java
@@ -27,8 +27,8 @@ public interface BatchCommitWriteStreamsRequestOrBuilder
*
*
*
- * Required. Parent table that all the streams should belong to, in the form of
- * `projects/{project}/datasets/{dataset}/tables/{table}`.
+ * Required. Parent table that all the streams should belong to, in the form
+ * of `projects/{project}/datasets/{dataset}/tables/{table}`.
*
*
*
@@ -42,8 +42,8 @@ public interface BatchCommitWriteStreamsRequestOrBuilder
*
*
*
- * Required. Parent table that all the streams should belong to, in the form of
- * `projects/{project}/datasets/{dataset}/tables/{table}`.
+ * Required. Parent table that all the streams should belong to, in the form
+ * of `projects/{project}/datasets/{dataset}/tables/{table}`.
*
*
*
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/ReadSession.java b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/ReadSession.java
index 0ad4a1e364..cf54ca88de 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/ReadSession.java
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/ReadSession.java
@@ -3452,9 +3452,10 @@ public com.google.protobuf.ByteString getNameBytes() {
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
* .google.protobuf.Timestamp expire_time = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
@@ -3470,9 +3471,10 @@ public boolean hasExpireTime() {
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
* .google.protobuf.Timestamp expire_time = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
@@ -3488,9 +3490,10 @@ public com.google.protobuf.Timestamp getExpireTime() {
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
* .google.protobuf.Timestamp expire_time = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
@@ -3507,7 +3510,8 @@ public com.google.protobuf.TimestampOrBuilder getExpireTimeOrBuilder() {
*
*
*
- * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.
+ * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not
+ * supported.
*
*
*
@@ -3524,7 +3528,8 @@ public int getDataFormatValue() {
*
*
*
- * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.
+ * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not
+ * supported.
*
*
*
@@ -3716,7 +3721,8 @@ public com.google.protobuf.ByteString getTableBytes() {
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -3733,7 +3739,8 @@ public boolean hasTableModifiers() {
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -3752,7 +3759,8 @@ public com.google.cloud.bigquery.storage.v1.ReadSession.TableModifiers getTableM
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -3940,14 +3948,34 @@ public long getEstimatedTotalBytesScanned() {
return estimatedTotalBytesScanned_;
}
+ public static final int ESTIMATED_ROW_COUNT_FIELD_NUMBER = 14;
+ private long estimatedRowCount_;
+ /**
+ *
+ *
+ *
+ * Output only. An estimate on the number of rows present in this session's
+ * streams. This estimate is based on metadata from the table which might be
+ * incomplete or stale.
+ *
+ *
+ * int64 estimated_row_count = 14 [(.google.api.field_behavior) = OUTPUT_ONLY];
+ *
+ * @return The estimatedRowCount.
+ */
+ @java.lang.Override
+ public long getEstimatedRowCount() {
+ return estimatedRowCount_;
+ }
+
public static final int TRACE_ID_FIELD_NUMBER = 13;
private volatile java.lang.Object traceId_;
/**
*
*
*
- * Optional. ID set by client to annotate a session identity. This does not need
- * to be strictly unique, but instead the same ID should be used to group
+ * Optional. ID set by client to annotate a session identity. This does not
+ * need to be strictly unique, but instead the same ID should be used to group
* logically connected sessions (e.g. All using the same ID for all sessions
* needed to complete a Spark SQL query is reasonable).
* Maximum length is 256 bytes.
@@ -3973,8 +4001,8 @@ public java.lang.String getTraceId() {
*
*
*
- * Optional. ID set by client to annotate a session identity. This does not need
- * to be strictly unique, but instead the same ID should be used to group
+ * Optional. ID set by client to annotate a session identity. This does not
+ * need to be strictly unique, but instead the same ID should be used to group
* logically connected sessions (e.g. All using the same ID for all sessions
* needed to complete a Spark SQL query is reasonable).
* Maximum length is 256 bytes.
@@ -4045,6 +4073,9 @@ public void writeTo(com.google.protobuf.CodedOutputStream output) throws java.io
if (!com.google.protobuf.GeneratedMessageV3.isStringEmpty(traceId_)) {
com.google.protobuf.GeneratedMessageV3.writeString(output, 13, traceId_);
}
+ if (estimatedRowCount_ != 0L) {
+ output.writeInt64(14, estimatedRowCount_);
+ }
getUnknownFields().writeTo(output);
}
@@ -4093,6 +4124,9 @@ public int getSerializedSize() {
if (!com.google.protobuf.GeneratedMessageV3.isStringEmpty(traceId_)) {
size += com.google.protobuf.GeneratedMessageV3.computeStringSize(13, traceId_);
}
+ if (estimatedRowCount_ != 0L) {
+ size += com.google.protobuf.CodedOutputStream.computeInt64Size(14, estimatedRowCount_);
+ }
size += getUnknownFields().getSerializedSize();
memoizedSize = size;
return size;
@@ -4126,6 +4160,7 @@ public boolean equals(final java.lang.Object obj) {
}
if (!getStreamsList().equals(other.getStreamsList())) return false;
if (getEstimatedTotalBytesScanned() != other.getEstimatedTotalBytesScanned()) return false;
+ if (getEstimatedRowCount() != other.getEstimatedRowCount()) return false;
if (!getTraceId().equals(other.getTraceId())) return false;
if (!getSchemaCase().equals(other.getSchemaCase())) return false;
switch (schemaCase_) {
@@ -4173,6 +4208,8 @@ public int hashCode() {
}
hash = (37 * hash) + ESTIMATED_TOTAL_BYTES_SCANNED_FIELD_NUMBER;
hash = (53 * hash) + com.google.protobuf.Internal.hashLong(getEstimatedTotalBytesScanned());
+ hash = (37 * hash) + ESTIMATED_ROW_COUNT_FIELD_NUMBER;
+ hash = (53 * hash) + com.google.protobuf.Internal.hashLong(getEstimatedRowCount());
hash = (37 * hash) + TRACE_ID_FIELD_NUMBER;
hash = (53 * hash) + getTraceId().hashCode();
switch (schemaCase_) {
@@ -4364,6 +4401,8 @@ public Builder clear() {
bitField0_ = (bitField0_ & ~0x00000001);
estimatedTotalBytesScanned_ = 0L;
+ estimatedRowCount_ = 0L;
+
traceId_ = "";
schemaCase_ = 0;
@@ -4438,6 +4477,7 @@ public com.google.cloud.bigquery.storage.v1.ReadSession buildPartial() {
result.streams_ = streamsBuilder_.build();
}
result.estimatedTotalBytesScanned_ = estimatedTotalBytesScanned_;
+ result.estimatedRowCount_ = estimatedRowCount_;
result.traceId_ = traceId_;
result.schemaCase_ = schemaCase_;
onBuilt();
@@ -4540,6 +4580,9 @@ public Builder mergeFrom(com.google.cloud.bigquery.storage.v1.ReadSession other)
if (other.getEstimatedTotalBytesScanned() != 0L) {
setEstimatedTotalBytesScanned(other.getEstimatedTotalBytesScanned());
}
+ if (other.getEstimatedRowCount() != 0L) {
+ setEstimatedRowCount(other.getEstimatedRowCount());
+ }
if (!other.getTraceId().isEmpty()) {
traceId_ = other.traceId_;
onChanged();
@@ -4660,6 +4703,12 @@ public Builder mergeFrom(
break;
} // case 106
+ case 112:
+ {
+ estimatedRowCount_ = input.readInt64();
+
+ break;
+ } // case 112
default:
{
if (!super.parseUnknownField(input, extensionRegistry, tag)) {
@@ -4814,9 +4863,10 @@ public Builder setNameBytes(com.google.protobuf.ByteString value) {
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
*
@@ -4832,9 +4882,10 @@ public boolean hasExpireTime() {
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
*
@@ -4856,9 +4907,10 @@ public com.google.protobuf.Timestamp getExpireTime() {
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
*
@@ -4882,9 +4934,10 @@ public Builder setExpireTime(com.google.protobuf.Timestamp value) {
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
*
@@ -4905,9 +4958,10 @@ public Builder setExpireTime(com.google.protobuf.Timestamp.Builder builderForVal
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
*
@@ -4933,9 +4987,10 @@ public Builder mergeExpireTime(com.google.protobuf.Timestamp value) {
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
*
@@ -4957,9 +5012,10 @@ public Builder clearExpireTime() {
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
*
@@ -4975,9 +5031,10 @@ public com.google.protobuf.Timestamp.Builder getExpireTimeBuilder() {
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
*
@@ -4997,9 +5054,10 @@ public com.google.protobuf.TimestampOrBuilder getExpireTimeOrBuilder() {
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
*
@@ -5028,7 +5086,8 @@ public com.google.protobuf.TimestampOrBuilder getExpireTimeOrBuilder() {
*
*
*
- * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.
+ * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not
+ * supported.
*
*
*
@@ -5045,7 +5104,8 @@ public int getDataFormatValue() {
*
*
*
- * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.
+ * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not
+ * supported.
*
*
*
@@ -5065,7 +5125,8 @@ public Builder setDataFormatValue(int value) {
*
*
*
- * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.
+ * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not
+ * supported.
*
*
*
@@ -5085,7 +5146,8 @@ public com.google.cloud.bigquery.storage.v1.DataFormat getDataFormat() {
*
*
*
- * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.
+ * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not
+ * supported.
*
*
*
@@ -5108,7 +5170,8 @@ public Builder setDataFormat(com.google.cloud.bigquery.storage.v1.DataFormat val
*
*
*
- * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.
+ * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not
+ * supported.
*
*
*
@@ -5711,7 +5774,8 @@ public Builder setTableBytes(com.google.protobuf.ByteString value) {
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -5727,7 +5791,8 @@ public boolean hasTableModifiers() {
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -5749,7 +5814,8 @@ public com.google.cloud.bigquery.storage.v1.ReadSession.TableModifiers getTableM
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -5774,7 +5840,8 @@ public Builder setTableModifiers(
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -5796,7 +5863,8 @@ public Builder setTableModifiers(
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -5826,7 +5894,8 @@ public Builder mergeTableModifiers(
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -5848,7 +5917,8 @@ public Builder clearTableModifiers() {
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -5865,7 +5935,8 @@ public Builder clearTableModifiers() {
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -5886,7 +5957,8 @@ public Builder clearTableModifiers() {
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -6639,13 +6711,71 @@ public Builder clearEstimatedTotalBytesScanned() {
return this;
}
+ private long estimatedRowCount_;
+ /**
+ *
+ *
+ *
+ * Output only. An estimate on the number of rows present in this session's
+ * streams. This estimate is based on metadata from the table which might be
+ * incomplete or stale.
+ *
+ *
+ * int64 estimated_row_count = 14 [(.google.api.field_behavior) = OUTPUT_ONLY];
+ *
+ * @return The estimatedRowCount.
+ */
+ @java.lang.Override
+ public long getEstimatedRowCount() {
+ return estimatedRowCount_;
+ }
+ /**
+ *
+ *
+ *
+ * Output only. An estimate on the number of rows present in this session's
+ * streams. This estimate is based on metadata from the table which might be
+ * incomplete or stale.
+ *
+ *
+ * int64 estimated_row_count = 14 [(.google.api.field_behavior) = OUTPUT_ONLY];
+ *
+ * @param value The estimatedRowCount to set.
+ * @return This builder for chaining.
+ */
+ public Builder setEstimatedRowCount(long value) {
+
+ estimatedRowCount_ = value;
+ onChanged();
+ return this;
+ }
+ /**
+ *
+ *
+ *
+ * Output only. An estimate on the number of rows present in this session's
+ * streams. This estimate is based on metadata from the table which might be
+ * incomplete or stale.
+ *
+ *
+ * int64 estimated_row_count = 14 [(.google.api.field_behavior) = OUTPUT_ONLY];
+ *
+ * @return This builder for chaining.
+ */
+ public Builder clearEstimatedRowCount() {
+
+ estimatedRowCount_ = 0L;
+ onChanged();
+ return this;
+ }
+
private java.lang.Object traceId_ = "";
/**
*
*
*
- * Optional. ID set by client to annotate a session identity. This does not need
- * to be strictly unique, but instead the same ID should be used to group
+ * Optional. ID set by client to annotate a session identity. This does not
+ * need to be strictly unique, but instead the same ID should be used to group
* logically connected sessions (e.g. All using the same ID for all sessions
* needed to complete a Spark SQL query is reasonable).
* Maximum length is 256 bytes.
@@ -6670,8 +6800,8 @@ public java.lang.String getTraceId() {
*
*
*
- * Optional. ID set by client to annotate a session identity. This does not need
- * to be strictly unique, but instead the same ID should be used to group
+ * Optional. ID set by client to annotate a session identity. This does not
+ * need to be strictly unique, but instead the same ID should be used to group
* logically connected sessions (e.g. All using the same ID for all sessions
* needed to complete a Spark SQL query is reasonable).
* Maximum length is 256 bytes.
@@ -6696,8 +6826,8 @@ public com.google.protobuf.ByteString getTraceIdBytes() {
*
*
*
- * Optional. ID set by client to annotate a session identity. This does not need
- * to be strictly unique, but instead the same ID should be used to group
+ * Optional. ID set by client to annotate a session identity. This does not
+ * need to be strictly unique, but instead the same ID should be used to group
* logically connected sessions (e.g. All using the same ID for all sessions
* needed to complete a Spark SQL query is reasonable).
* Maximum length is 256 bytes.
@@ -6721,8 +6851,8 @@ public Builder setTraceId(java.lang.String value) {
*
*
*
- * Optional. ID set by client to annotate a session identity. This does not need
- * to be strictly unique, but instead the same ID should be used to group
+ * Optional. ID set by client to annotate a session identity. This does not
+ * need to be strictly unique, but instead the same ID should be used to group
* logically connected sessions (e.g. All using the same ID for all sessions
* needed to complete a Spark SQL query is reasonable).
* Maximum length is 256 bytes.
@@ -6742,8 +6872,8 @@ public Builder clearTraceId() {
*
*
*
- * Optional. ID set by client to annotate a session identity. This does not need
- * to be strictly unique, but instead the same ID should be used to group
+ * Optional. ID set by client to annotate a session identity. This does not
+ * need to be strictly unique, but instead the same ID should be used to group
* logically connected sessions (e.g. All using the same ID for all sessions
* needed to complete a Spark SQL query is reasonable).
* Maximum length is 256 bytes.
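The regenerated ReadSession message above exposes the new output-only estimated_row_count field (number 14) through getEstimatedRowCount(), setEstimatedRowCount(long), and clearEstimatedRowCount(). A minimal usage sketch for reading the estimate from a freshly created session follows; the project and table names are placeholders, and the surrounding calls (BigQueryReadClient, CreateReadSessionRequest) are the existing v1 read API rather than anything introduced by this change:

import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
import com.google.cloud.bigquery.storage.v1.CreateReadSessionRequest;
import com.google.cloud.bigquery.storage.v1.DataFormat;
import com.google.cloud.bigquery.storage.v1.ReadSession;

public class EstimatedRowCountExample {
  public static void main(String[] args) throws Exception {
    // Placeholder resource names; substitute a real project and table.
    String parent = "projects/my-project";
    String srcTable = "projects/my-project/datasets/my_dataset/tables/my_table";

    try (BigQueryReadClient client = BigQueryReadClient.create()) {
      ReadSession sessionProto =
          ReadSession.newBuilder().setTable(srcTable).setDataFormat(DataFormat.AVRO).build();
      CreateReadSessionRequest request =
          CreateReadSessionRequest.newBuilder()
              .setParent(parent)
              .setReadSession(sessionProto)
              .setMaxStreamCount(1)
              .build();

      ReadSession session = client.createReadSession(request);
      // Both estimates are output-only fields populated by the service; the row
      // count is the value added in this change and may be incomplete or stale.
      System.out.println("Estimated rows: " + session.getEstimatedRowCount());
      System.out.println("Estimated bytes scanned: " + session.getEstimatedTotalBytesScanned());
    }
  }
}

Because the field is OUTPUT_ONLY, the generated setter mainly matters for tests that stub server responses, as in the mocked expectedResponse at the top of this diff.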
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/ReadSessionOrBuilder.java b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/ReadSessionOrBuilder.java
index 58569f3706..dcc3148bb5 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/ReadSessionOrBuilder.java
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/ReadSessionOrBuilder.java
@@ -54,9 +54,10 @@ public interface ReadSessionOrBuilder
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
* .google.protobuf.Timestamp expire_time = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
@@ -69,9 +70,10 @@ public interface ReadSessionOrBuilder
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
* .google.protobuf.Timestamp expire_time = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
@@ -84,9 +86,10 @@ public interface ReadSessionOrBuilder
*
*
*
- * Output only. Time at which the session becomes invalid. After this time, subsequent
- * requests to read this Session will return errors. The expire_time is
- * automatically assigned and currently cannot be specified or updated.
+ * Output only. Time at which the session becomes invalid. After this time,
+ * subsequent requests to read this Session will return errors. The
+ * expire_time is automatically assigned and currently cannot be specified or
+ * updated.
*
*
* .google.protobuf.Timestamp expire_time = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
@@ -98,7 +101,8 @@ public interface ReadSessionOrBuilder
*
*
*
- * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.
+ * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not
+ * supported.
*
*
*
@@ -112,7 +116,8 @@ public interface ReadSessionOrBuilder
*
*
*
- * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.
+ * Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not
+ * supported.
*
*
*
@@ -240,7 +245,8 @@ public interface ReadSessionOrBuilder
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -254,7 +260,8 @@ public interface ReadSessionOrBuilder
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -268,7 +275,8 @@ public interface ReadSessionOrBuilder
*
*
*
- * Optional. Any modifiers which are applied when reading from the specified table.
+ * Optional. Any modifiers which are applied when reading from the specified
+ * table.
*
*
*
@@ -422,8 +430,23 @@ public interface ReadSessionOrBuilder
*
*
*
- * Optional. ID set by client to annotate a session identity. This does not need
- * to be strictly unique, but instead the same ID should be used to group
+ * Output only. An estimate on the number of rows present in this session's
+ * streams. This estimate is based on metadata from the table which might be
+ * incomplete or stale.
+ *
+ *
+ * int64 estimated_row_count = 14 [(.google.api.field_behavior) = OUTPUT_ONLY];
+ *
+ * @return The estimatedRowCount.
+ */
+ long getEstimatedRowCount();
+
+ /**
+ *
+ *
+ *
+ * Optional. ID set by client to annotate a session identity. This does not
+ * need to be strictly unique, but instead the same ID should be used to group
* logically connected sessions (e.g. All using the same ID for all sessions
* needed to complete a Spark SQL query is reasonable).
* Maximum length is 256 bytes.
@@ -438,8 +461,8 @@ public interface ReadSessionOrBuilder
*
*
*
- * Optional. ID set by client to annotate a session identity. This does not need
- * to be strictly unique, but instead the same ID should be used to group
+ * Optional. ID set by client to annotate a session identity. This does not
+ * need to be strictly unique, but instead the same ID should be used to group
* logically connected sessions (e.g. All using the same ID for all sessions
* needed to complete a Spark SQL query is reasonable).
* Maximum length is 256 bytes.
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/StreamProto.java b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/StreamProto.java
index ab597149fa..ba33e9e939 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/StreamProto.java
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/StreamProto.java
@@ -63,7 +63,7 @@ public static com.google.protobuf.Descriptors.FileDescriptor getDescriptor() {
+ "uery/storage/v1/arrow.proto\032+google/clou"
+ "d/bigquery/storage/v1/avro.proto\032,google"
+ "/cloud/bigquery/storage/v1/table.proto\032\037"
- + "google/protobuf/timestamp.proto\"\242\t\n\013Read"
+ + "google/protobuf/timestamp.proto\"\304\t\n\013Read"
+ "Session\022\021\n\004name\030\001 \001(\tB\003\340A\003\0224\n\013expire_tim"
+ "e\030\002 \001(\0132\032.google.protobuf.TimestampB\003\340A\003"
+ "\022F\n\013data_format\030\003 \001(\0162,.google.cloud.big"
@@ -80,49 +80,49 @@ public static com.google.protobuf.Descriptors.FileDescriptor getDescriptor() {
+ "bleReadOptionsB\003\340A\001\022B\n\007streams\030\n \003(\0132,.g"
+ "oogle.cloud.bigquery.storage.v1.ReadStre"
+ "amB\003\340A\003\022*\n\035estimated_total_bytes_scanned"
- + "\030\014 \001(\003B\003\340A\003\022\025\n\010trace_id\030\r \001(\tB\003\340A\001\032C\n\016Ta"
- + "bleModifiers\0221\n\rsnapshot_time\030\001 \001(\0132\032.go"
- + "ogle.protobuf.Timestamp\032\273\002\n\020TableReadOpt"
- + "ions\022\027\n\017selected_fields\030\001 \003(\t\022\027\n\017row_res"
- + "triction\030\002 \001(\t\022g\n\033arrow_serialization_op"
- + "tions\030\003 \001(\0132;.google.cloud.bigquery.stor"
- + "age.v1.ArrowSerializationOptionsB\003\340A\001H\000\022"
- + "e\n\032avro_serialization_options\030\004 \001(\0132:.go"
- + "ogle.cloud.bigquery.storage.v1.AvroSeria"
- + "lizationOptionsB\003\340A\001H\000B%\n#output_format_"
- + "serialization_options:k\352Ah\n*bigquerystor"
- + "age.googleapis.com/ReadSession\022:projects"
- + "/{project}/locations/{location}/sessions"
- + "/{session}B\010\n\006schema\"\234\001\n\nReadStream\022\021\n\004n"
- + "ame\030\001 \001(\tB\003\340A\003:{\352Ax\n)bigquerystorage.goo"
- + "gleapis.com/ReadStream\022Kprojects/{projec"
- + "t}/locations/{location}/sessions/{sessio"
- + "n}/streams/{stream}\"\373\004\n\013WriteStream\022\021\n\004n"
- + "ame\030\001 \001(\tB\003\340A\003\022E\n\004type\030\002 \001(\01622.google.cl"
- + "oud.bigquery.storage.v1.WriteStream.Type"
- + "B\003\340A\005\0224\n\013create_time\030\003 \001(\0132\032.google.prot"
- + "obuf.TimestampB\003\340A\003\0224\n\013commit_time\030\004 \001(\013"
- + "2\032.google.protobuf.TimestampB\003\340A\003\022H\n\014tab"
- + "le_schema\030\005 \001(\0132-.google.cloud.bigquery."
- + "storage.v1.TableSchemaB\003\340A\003\022P\n\nwrite_mod"
- + "e\030\007 \001(\01627.google.cloud.bigquery.storage."
- + "v1.WriteStream.WriteModeB\003\340A\005\022\025\n\010locatio"
- + "n\030\010 \001(\tB\003\340A\005\"F\n\004Type\022\024\n\020TYPE_UNSPECIFIED"
- + "\020\000\022\r\n\tCOMMITTED\020\001\022\013\n\007PENDING\020\002\022\014\n\010BUFFER"
- + "ED\020\003\"3\n\tWriteMode\022\032\n\026WRITE_MODE_UNSPECIF"
- + "IED\020\000\022\n\n\006INSERT\020\001:v\352As\n*bigquerystorage."
- + "googleapis.com/WriteStream\022Eprojects/{pr"
- + "oject}/datasets/{dataset}/tables/{table}"
- + "/streams/{stream}*>\n\nDataFormat\022\033\n\027DATA_"
- + "FORMAT_UNSPECIFIED\020\000\022\010\n\004AVRO\020\001\022\t\n\005ARROW\020"
- + "\002*I\n\017WriteStreamView\022!\n\035WRITE_STREAM_VIE"
- + "W_UNSPECIFIED\020\000\022\t\n\005BASIC\020\001\022\010\n\004FULL\020\002B\304\001\n"
- + "$com.google.cloud.bigquery.storage.v1B\013S"
- + "treamProtoP\001ZGgoogle.golang.org/genproto"
- + "/googleapis/cloud/bigquery/storage/v1;st"
- + "orage\252\002 Google.Cloud.BigQuery.Storage.V1"
- + "\312\002 Google\\Cloud\\BigQuery\\Storage\\V1b\006pro"
- + "to3"
+ + "\030\014 \001(\003B\003\340A\003\022 \n\023estimated_row_count\030\016 \001(\003"
+ + "B\003\340A\003\022\025\n\010trace_id\030\r \001(\tB\003\340A\001\032C\n\016TableMod"
+ + "ifiers\0221\n\rsnapshot_time\030\001 \001(\0132\032.google.p"
+ + "rotobuf.Timestamp\032\273\002\n\020TableReadOptions\022\027"
+ + "\n\017selected_fields\030\001 \003(\t\022\027\n\017row_restricti"
+ + "on\030\002 \001(\t\022g\n\033arrow_serialization_options\030"
+ + "\003 \001(\0132;.google.cloud.bigquery.storage.v1"
+ + ".ArrowSerializationOptionsB\003\340A\001H\000\022e\n\032avr"
+ + "o_serialization_options\030\004 \001(\0132:.google.c"
+ + "loud.bigquery.storage.v1.AvroSerializati"
+ + "onOptionsB\003\340A\001H\000B%\n#output_format_serial"
+ + "ization_options:k\352Ah\n*bigquerystorage.go"
+ + "ogleapis.com/ReadSession\022:projects/{proj"
+ + "ect}/locations/{location}/sessions/{sess"
+ + "ion}B\010\n\006schema\"\234\001\n\nReadStream\022\021\n\004name\030\001 "
+ + "\001(\tB\003\340A\003:{\352Ax\n)bigquerystorage.googleapi"
+ + "s.com/ReadStream\022Kprojects/{project}/loc"
+ + "ations/{location}/sessions/{session}/str"
+ + "eams/{stream}\"\373\004\n\013WriteStream\022\021\n\004name\030\001 "
+ + "\001(\tB\003\340A\003\022E\n\004type\030\002 \001(\01622.google.cloud.bi"
+ + "gquery.storage.v1.WriteStream.TypeB\003\340A\005\022"
+ + "4\n\013create_time\030\003 \001(\0132\032.google.protobuf.T"
+ + "imestampB\003\340A\003\0224\n\013commit_time\030\004 \001(\0132\032.goo"
+ + "gle.protobuf.TimestampB\003\340A\003\022H\n\014table_sch"
+ + "ema\030\005 \001(\0132-.google.cloud.bigquery.storag"
+ + "e.v1.TableSchemaB\003\340A\003\022P\n\nwrite_mode\030\007 \001("
+ + "\01627.google.cloud.bigquery.storage.v1.Wri"
+ + "teStream.WriteModeB\003\340A\005\022\025\n\010location\030\010 \001("
+ + "\tB\003\340A\005\"F\n\004Type\022\024\n\020TYPE_UNSPECIFIED\020\000\022\r\n\t"
+ + "COMMITTED\020\001\022\013\n\007PENDING\020\002\022\014\n\010BUFFERED\020\003\"3"
+ + "\n\tWriteMode\022\032\n\026WRITE_MODE_UNSPECIFIED\020\000\022"
+ + "\n\n\006INSERT\020\001:v\352As\n*bigquerystorage.google"
+ + "apis.com/WriteStream\022Eprojects/{project}"
+ + "/datasets/{dataset}/tables/{table}/strea"
+ + "ms/{stream}*>\n\nDataFormat\022\033\n\027DATA_FORMAT"
+ + "_UNSPECIFIED\020\000\022\010\n\004AVRO\020\001\022\t\n\005ARROW\020\002*I\n\017W"
+ + "riteStreamView\022!\n\035WRITE_STREAM_VIEW_UNSP"
+ + "ECIFIED\020\000\022\t\n\005BASIC\020\001\022\010\n\004FULL\020\002B\304\001\n$com.g"
+ + "oogle.cloud.bigquery.storage.v1B\013StreamP"
+ + "rotoP\001ZGgoogle.golang.org/genproto/googl"
+ + "eapis/cloud/bigquery/storage/v1;storage\252"
+ + "\002 Google.Cloud.BigQuery.Storage.V1\312\002 Goo"
+ + "gle\\Cloud\\BigQuery\\Storage\\V1b\006proto3"
};
descriptor =
com.google.protobuf.Descriptors.FileDescriptor.internalBuildGeneratedFileFrom(
@@ -151,6 +151,7 @@ public static com.google.protobuf.Descriptors.FileDescriptor getDescriptor() {
"ReadOptions",
"Streams",
"EstimatedTotalBytesScanned",
+ "EstimatedRowCount",
"TraceId",
"Schema",
});
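The serialized descriptor change above bumps the ReadSession length prefix from "\242\t" to "\304\t" (varint 1186 to 1220, i.e. 34 extra bytes for the new estimated_row_count entry) and adds "EstimatedRowCount" to the field-name list. Relatedly, the regenerated parse loop in ReadSession.java reads the field under case 112 because a protobuf tag packs the field number together with the wire type. A small illustrative check of that arithmetic, separate from the generated code:

public class WireTagCheck {
  public static void main(String[] args) {
    int fieldNumber = 14; // estimated_row_count
    int wireType = 0;     // varint, used for int64 fields
    int tag = (fieldNumber << 3) | wireType;
    System.out.println(tag); // 112, matching "case 112" in the mergeFrom parse loop
  }
}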
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/TableFieldSchema.java b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/TableFieldSchema.java
index 1bdafd6116..db75cb4463 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/TableFieldSchema.java
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/TableFieldSchema.java
@@ -776,7 +776,8 @@ public com.google.cloud.bigquery.storage.v1.TableFieldSchema.Mode getMode() {
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -791,7 +792,8 @@ public java.util.List<com.google.cloud.bigquery.storage.v1.TableFieldSchema> get
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -807,7 +809,8 @@ public java.util.List<? extends com.google.cloud.bigquery.storage.v1.TableFieldSchemaOrBuilder> get
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -822,7 +825,8 @@ public int getFieldsCount() {
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -837,7 +841,8 @@ public com.google.cloud.bigquery.storage.v1.TableFieldSchema getFields(int index
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -1862,7 +1867,8 @@ private void ensureFieldsIsMutable() {
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -1880,7 +1886,8 @@ public java.util.List<com.google.cloud.bigquery.storage.v1.TableFieldSchema> get
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -1898,7 +1905,8 @@ public int getFieldsCount() {
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -1916,7 +1924,8 @@ public com.google.cloud.bigquery.storage.v1.TableFieldSchema getFields(int index
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -1941,7 +1950,8 @@ public Builder setFields(
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -1963,7 +1973,8 @@ public Builder setFields(
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -1987,7 +1998,8 @@ public Builder addFields(com.google.cloud.bigquery.storage.v1.TableFieldSchema v
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -2012,7 +2024,8 @@ public Builder addFields(
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -2034,7 +2047,8 @@ public Builder addFields(
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -2056,7 +2070,8 @@ public Builder addFields(
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -2079,7 +2094,8 @@ public Builder addAllFields(
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -2100,7 +2116,8 @@ public Builder clearFields() {
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -2121,7 +2138,8 @@ public Builder removeFields(int index) {
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -2136,7 +2154,8 @@ public com.google.cloud.bigquery.storage.v1.TableFieldSchema.Builder getFieldsBu
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -2155,7 +2174,8 @@ public com.google.cloud.bigquery.storage.v1.TableFieldSchemaOrBuilder getFieldsO
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -2174,7 +2194,8 @@ public com.google.cloud.bigquery.storage.v1.TableFieldSchemaOrBuilder getFieldsO
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -2189,7 +2210,8 @@ public com.google.cloud.bigquery.storage.v1.TableFieldSchema.Builder addFieldsBu
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -2206,7 +2228,8 @@ public com.google.cloud.bigquery.storage.v1.TableFieldSchema.Builder addFieldsBu
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/TableFieldSchemaOrBuilder.java b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/TableFieldSchemaOrBuilder.java
index d011684437..9d916b387a 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/TableFieldSchemaOrBuilder.java
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/TableFieldSchemaOrBuilder.java
@@ -114,7 +114,8 @@ public interface TableFieldSchemaOrBuilder
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -126,7 +127,8 @@ public interface TableFieldSchemaOrBuilder
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -138,7 +140,8 @@ public interface TableFieldSchemaOrBuilder
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -150,7 +153,8 @@ public interface TableFieldSchemaOrBuilder
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
@@ -163,7 +167,8 @@ public interface TableFieldSchemaOrBuilder
*
*
*
- * Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ * Optional. Describes the nested schema fields if the type property is set to
+ * STRUCT.
*
*
*
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/WriteStream.java b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/WriteStream.java
index 3ed3425fd2..9988699989 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/WriteStream.java
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/WriteStream.java
@@ -487,8 +487,8 @@ public com.google.cloud.bigquery.storage.v1.WriteStream.Type getType() {
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
* .google.protobuf.Timestamp create_time = 3 [(.google.api.field_behavior) = OUTPUT_ONLY];
@@ -504,8 +504,8 @@ public boolean hasCreateTime() {
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
* .google.protobuf.Timestamp create_time = 3 [(.google.api.field_behavior) = OUTPUT_ONLY];
@@ -521,8 +521,8 @@ public com.google.protobuf.Timestamp getCreateTime() {
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
* .google.protobuf.Timestamp create_time = 3 [(.google.api.field_behavior) = OUTPUT_ONLY];
@@ -1475,8 +1475,8 @@ public Builder clearType() {
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
*
@@ -1492,8 +1492,8 @@ public boolean hasCreateTime() {
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
*
@@ -1515,8 +1515,8 @@ public com.google.protobuf.Timestamp getCreateTime() {
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
*
@@ -1540,8 +1540,8 @@ public Builder setCreateTime(com.google.protobuf.Timestamp value) {
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
*
@@ -1562,8 +1562,8 @@ public Builder setCreateTime(com.google.protobuf.Timestamp.Builder builderForVal
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
*
@@ -1589,8 +1589,8 @@ public Builder mergeCreateTime(com.google.protobuf.Timestamp value) {
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
*
@@ -1612,8 +1612,8 @@ public Builder clearCreateTime() {
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
*
@@ -1629,8 +1629,8 @@ public com.google.protobuf.Timestamp.Builder getCreateTimeBuilder() {
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
*
@@ -1650,8 +1650,8 @@ public com.google.protobuf.TimestampOrBuilder getCreateTimeOrBuilder() {
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
*
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/WriteStreamOrBuilder.java b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/WriteStreamOrBuilder.java
index 27aca3d4f1..dcffa68a6f 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/WriteStreamOrBuilder.java
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/java/com/google/cloud/bigquery/storage/v1/WriteStreamOrBuilder.java
@@ -83,8 +83,8 @@ public interface WriteStreamOrBuilder
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
* .google.protobuf.Timestamp create_time = 3 [(.google.api.field_behavior) = OUTPUT_ONLY];
@@ -97,8 +97,8 @@ public interface WriteStreamOrBuilder
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
* .google.protobuf.Timestamp create_time = 3 [(.google.api.field_behavior) = OUTPUT_ONLY];
@@ -111,8 +111,8 @@ public interface WriteStreamOrBuilder
*
*
*
- * Output only. Create time of the stream. For the _default stream, this is the
- * creation_time of the table.
+ * Output only. Create time of the stream. For the _default stream, this is
+ * the creation_time of the table.
*
*
* .google.protobuf.Timestamp create_time = 3 [(.google.api.field_behavior) = OUTPUT_ONLY];
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/storage.proto b/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/storage.proto
index b01ed271ae..85daf6dfa2 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/storage.proto
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/storage.proto
@@ -73,7 +73,8 @@ service BigQueryRead {
post: "/v1/{read_session.table=projects/*/datasets/*/tables/*}"
body: "*"
};
- option (google.api.method_signature) = "parent,read_session,max_stream_count";
+ option (google.api.method_signature) =
+ "parent,read_session,max_stream_count";
}
// Reads rows from the stream in the format prescribed by the ReadSession.
@@ -102,7 +103,8 @@ service BigQueryRead {
// original, primary, and residual, that original[0-j] = primary[0-j] and
// original[j-n] = residual[0-m] once the streams have been read to
// completion.
- rpc SplitReadStream(SplitReadStreamRequest) returns (SplitReadStreamResponse) {
+ rpc SplitReadStream(SplitReadStreamRequest)
+ returns (SplitReadStreamResponse) {
option (google.api.http) = {
get: "/v1/{name=projects/*/locations/*/sessions/*/streams/*}"
};
@@ -186,7 +188,8 @@ service BigQueryWrite {
// Finalize a write stream so that no new data can be appended to the
// stream. Finalize is not supported on the '_default' stream.
- rpc FinalizeWriteStream(FinalizeWriteStreamRequest) returns (FinalizeWriteStreamResponse) {
+ rpc FinalizeWriteStream(FinalizeWriteStreamRequest)
+ returns (FinalizeWriteStreamResponse) {
option (google.api.http) = {
post: "/v1/{name=projects/*/datasets/*/tables/*/streams/*}"
body: "*"
@@ -200,7 +203,8 @@ service BigQueryWrite {
// Streams must be finalized before commit and cannot be committed multiple
// times. Once a stream is committed, data in the stream becomes available
// for read operations.
- rpc BatchCommitWriteStreams(BatchCommitWriteStreamsRequest) returns (BatchCommitWriteStreamsResponse) {
+ rpc BatchCommitWriteStreams(BatchCommitWriteStreamsRequest)
+ returns (BatchCommitWriteStreamsResponse) {
option (google.api.http) = {
get: "/v1/{parent=projects/*/datasets/*/tables/*}"
};
@@ -384,9 +388,7 @@ message CreateWriteStreamRequest {
// of `projects/{project}/datasets/{dataset}/tables/{table}`.
string parent = 1 [
(google.api.field_behavior) = REQUIRED,
- (google.api.resource_reference) = {
- type: "bigquery.googleapis.com/Table"
- }
+ (google.api.resource_reference) = { type: "bigquery.googleapis.com/Table" }
];
// Required. Stream to be created.
@@ -434,10 +436,10 @@ message AppendRowsRequest {
DEFAULT_VALUE = 2;
}
- // Required. The write_stream identifies the target of the append operation, and only
- // needs to be specified as part of the first request on the gRPC connection.
- // If provided for subsequent requests, it must match the value of the first
- // request.
+ // Required. The write_stream identifies the target of the append operation,
+ // and only needs to be specified as part of the first request on the gRPC
+ // connection. If provided for subsequent requests, it must match the value of
+ // the first request.
//
// For explicitly created write streams, the format is:
//
@@ -562,13 +564,11 @@ message GetWriteStreamRequest {
// Request message for `BatchCommitWriteStreams`.
message BatchCommitWriteStreamsRequest {
- // Required. Parent table that all the streams should belong to, in the form of
- // `projects/{project}/datasets/{dataset}/tables/{table}`.
+ // Required. Parent table that all the streams should belong to, in the form
+ // of `projects/{project}/datasets/{dataset}/tables/{table}`.
string parent = 1 [
(google.api.field_behavior) = REQUIRED,
- (google.api.resource_reference) = {
- type: "bigquery.googleapis.com/Table"
- }
+ (google.api.resource_reference) = { type: "bigquery.googleapis.com/Table" }
];
// Required. The group of streams that will be committed atomically.
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/stream.proto b/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/stream.proto
index fe71adfa6b..ec137de19d 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/stream.proto
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/stream.proto
@@ -122,10 +122,12 @@ message ReadSession {
oneof output_format_serialization_options {
// Optional. Options specific to the Apache Arrow output format.
- ArrowSerializationOptions arrow_serialization_options = 3 [(google.api.field_behavior) = OPTIONAL];
+ ArrowSerializationOptions arrow_serialization_options = 3
+ [(google.api.field_behavior) = OPTIONAL];
// Optional. Options specific to the Apache Avro output format
- AvroSerializationOptions avro_serialization_options = 4 [(google.api.field_behavior) = OPTIONAL];
+ AvroSerializationOptions avro_serialization_options = 4
+ [(google.api.field_behavior) = OPTIONAL];
}
}
@@ -133,12 +135,15 @@ message ReadSession {
// `projects/{project_id}/locations/{location}/sessions/{session_id}`.
string name = 1 [(google.api.field_behavior) = OUTPUT_ONLY];
- // Output only. Time at which the session becomes invalid. After this time, subsequent
- // requests to read this Session will return errors. The expire_time is
- // automatically assigned and currently cannot be specified or updated.
- google.protobuf.Timestamp expire_time = 2 [(google.api.field_behavior) = OUTPUT_ONLY];
+ // Output only. Time at which the session becomes invalid. After this time,
+ // subsequent requests to read this Session will return errors. The
+ // expire_time is automatically assigned and currently cannot be specified or
+ // updated.
+ google.protobuf.Timestamp expire_time = 2
+ [(google.api.field_behavior) = OUTPUT_ONLY];
- // Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not supported.
+ // Immutable. Data format of the output data. DATA_FORMAT_UNSPECIFIED not
+ // supported.
DataFormat data_format = 3 [(google.api.field_behavior) = IMMUTABLE];
// The schema for the read. If read_options.selected_fields is set, the
@@ -156,12 +161,11 @@ message ReadSession {
// `projects/{project_id}/datasets/{dataset_id}/tables/{table_id}`
string table = 6 [
(google.api.field_behavior) = IMMUTABLE,
- (google.api.resource_reference) = {
- type: "bigquery.googleapis.com/Table"
- }
+ (google.api.resource_reference) = { type: "bigquery.googleapis.com/Table" }
];
- // Optional. Any modifiers which are applied when reading from the specified table.
+ // Optional. Any modifiers which are applied when reading from the specified
+ // table.
TableModifiers table_modifiers = 7 [(google.api.field_behavior) = OPTIONAL];
// Optional. Read options for this session (e.g. column selection, filters).
@@ -178,10 +182,16 @@ message ReadSession {
// Output only. An estimate on the number of bytes this session will scan when
// all streams are completely consumed. This estimate is based on
// metadata from the table which might be incomplete or stale.
- int64 estimated_total_bytes_scanned = 12 [(google.api.field_behavior) = OUTPUT_ONLY];
+ int64 estimated_total_bytes_scanned = 12
+ [(google.api.field_behavior) = OUTPUT_ONLY];
+
+ // Output only. An estimate on the number of rows present in this session's
+ // streams. This estimate is based on metadata from the table which might be
+ // incomplete or stale.
+ int64 estimated_row_count = 14 [(google.api.field_behavior) = OUTPUT_ONLY];
- // Optional. ID set by client to annotate a session identity. This does not need
- // to be strictly unique, but instead the same ID should be used to group
+ // Optional. ID set by client to annotate a session identity. This does not
+ // need to be strictly unique, but instead the same ID should be used to group
// logically connected sessions (e.g. All using the same ID for all sessions
// needed to complete a Spark SQL query is reasonable).
//
@@ -260,15 +270,17 @@ message WriteStream {
// Immutable. Type of the stream.
Type type = 2 [(google.api.field_behavior) = IMMUTABLE];
- // Output only. Create time of the stream. For the _default stream, this is the
- // creation_time of the table.
- google.protobuf.Timestamp create_time = 3 [(google.api.field_behavior) = OUTPUT_ONLY];
+ // Output only. Create time of the stream. For the _default stream, this is
+ // the creation_time of the table.
+ google.protobuf.Timestamp create_time = 3
+ [(google.api.field_behavior) = OUTPUT_ONLY];
// Output only. Commit time of the stream.
// If a stream is of `COMMITTED` type, then it will have a commit_time same as
// `create_time`. If the stream is of `PENDING` type, empty commit_time
// means it is not committed.
- google.protobuf.Timestamp commit_time = 4 [(google.api.field_behavior) = OUTPUT_ONLY];
+ google.protobuf.Timestamp commit_time = 4
+ [(google.api.field_behavior) = OUTPUT_ONLY];
// Output only. The schema of the destination table. It is only returned in
// `CreateWriteStream` response. Caller should generate data that's
diff --git a/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/table.proto b/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/table.proto
index fa4f840c58..57e7933424 100644
--- a/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/table.proto
+++ b/proto-google-cloud-bigquerystorage-v1/src/main/proto/google/cloud/bigquery/storage/v1/table.proto
@@ -107,7 +107,8 @@ message TableFieldSchema {
// Optional. The field mode. The default value is NULLABLE.
Mode mode = 3 [(google.api.field_behavior) = OPTIONAL];
- // Optional. Describes the nested schema fields if the type property is set to STRUCT.
+ // Optional. Describes the nested schema fields if the type property is set to
+ // STRUCT.
repeated TableFieldSchema fields = 4 [(google.api.field_behavior) = OPTIONAL];
// Optional. The field description. The maximum length is 1,024 characters.
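
The stream.proto hunk in this patch adds estimated_row_count (field 14) to ReadSession alongside the existing estimated_total_bytes_scanned, and the generated Java getters follow. As a rough sketch only (not code from this patch; the project, dataset, and table names are placeholders), a caller could surface both estimates from the v1 client like this:

import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
import com.google.cloud.bigquery.storage.v1.CreateReadSessionRequest;
import com.google.cloud.bigquery.storage.v1.DataFormat;
import com.google.cloud.bigquery.storage.v1.ReadSession;

public class EstimatedRowCountSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder identifiers; substitute a real project, dataset, and table.
    String parent = "projects/my-project";
    String table = "projects/my-project/datasets/my_dataset/tables/my_table";

    try (BigQueryReadClient client = BigQueryReadClient.create()) {
      CreateReadSessionRequest request =
          CreateReadSessionRequest.newBuilder()
              .setParent(parent)
              .setReadSession(
                  ReadSession.newBuilder().setTable(table).setDataFormat(DataFormat.ARROW))
              .setMaxStreamCount(1)
              .build();

      ReadSession session = client.createReadSession(request);

      // Both estimates are OUTPUT_ONLY and derived from table metadata, so they
      // may be incomplete or stale; treat them as hints, not exact counts.
      System.out.println("estimated_total_bytes_scanned: " + session.getEstimatedTotalBytesScanned());
      System.out.println("estimated_row_count: " + session.getEstimatedRowCount());
    }
  }
}

A max_stream_count of 1 is simply the smallest request that returns a populated session; actual row reading over the returned streams is omitted here.
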
From e9e7ac3d4e655f7b77d830108226891c45464069 Mon Sep 17 00:00:00 2001
From: Mend Renovate
Date: Thu, 15 Dec 2022 19:53:51 +0100
Subject: [PATCH 3/8] deps: update dependency
com.google.cloud:google-cloud-bigquery to v2.20.0 (#1912)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* deps: update dependency com.google.cloud:google-cloud-bigquery to v2.20.0
* 🦉 Updates from OwlBot post-processor
See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md
Co-authored-by: Owl Bot
---
pom.xml | 2 +-
samples/install-without-bom/pom.xml | 2 +-
samples/snapshot/pom.xml | 2 +-
samples/snippets/pom.xml | 2 +-
tutorials/JsonWriterDefaultStream/pom.xml | 2 +-
5 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/pom.xml b/pom.xml
index b5f689abd8..b508f7f6db 100644
--- a/pom.xml
+++ b/pom.xml
@@ -132,7 +132,7 @@
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-bigquery</artifactId>
-      <version>2.19.1</version>
+      <version>2.20.0</version>
      <scope>test</scope>
diff --git a/samples/install-without-bom/pom.xml b/samples/install-without-bom/pom.xml
index a64e59bfc9..0731c1a60d 100644
--- a/samples/install-without-bom/pom.xml
+++ b/samples/install-without-bom/pom.xml
@@ -37,7 +37,7 @@
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-bigquery</artifactId>
-      <version>2.19.1</version>
+      <version>2.20.0</version>
      <groupId>org.apache.avro</groupId>
diff --git a/samples/snapshot/pom.xml b/samples/snapshot/pom.xml
index 7a36c4cbf7..d48dc39af0 100644
--- a/samples/snapshot/pom.xml
+++ b/samples/snapshot/pom.xml
@@ -36,7 +36,7 @@
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-bigquery</artifactId>
-      <version>2.19.1</version>
+      <version>2.20.0</version>
      <groupId>org.apache.avro</groupId>
diff --git a/samples/snippets/pom.xml b/samples/snippets/pom.xml
index a093a969af..445e77b289 100644
--- a/samples/snippets/pom.xml
+++ b/samples/snippets/pom.xml
@@ -48,7 +48,7 @@
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-bigquery</artifactId>
-      <version>2.19.1</version>
+      <version>2.20.0</version>
      <groupId>org.apache.avro</groupId>
diff --git a/tutorials/JsonWriterDefaultStream/pom.xml b/tutorials/JsonWriterDefaultStream/pom.xml
index aaeff34348..7b86d9520f 100644
--- a/tutorials/JsonWriterDefaultStream/pom.xml
+++ b/tutorials/JsonWriterDefaultStream/pom.xml
@@ -24,7 +24,7 @@
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-bigquery</artifactId>
-      <version>2.19.1</version>
+      <version>2.20.0</version>
      <groupId>org.apache.avro</groupId>
From 2d38f5f856cfcf77920f0c7a799dd5fa2616a911 Mon Sep 17 00:00:00 2001
From: Mend Renovate
Date: Wed, 4 Jan 2023 17:16:35 +0100
Subject: [PATCH 4/8] chore(deps): update dependency
com.google.cloud:libraries-bom to v26.2.0 (#1915)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* chore(deps): update dependency com.google.cloud:libraries-bom to v26.2.0
* 🦉 Updates from OwlBot post-processor
See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md
Co-authored-by: Owl Bot
---
README.md | 4 ++--
samples/snippets/pom.xml | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index 6774ebb0b5..cea68e863c 100644
--- a/README.md
+++ b/README.md
@@ -19,7 +19,7 @@ If you are using Maven with [BOM][libraries-bom], add this to your pom.xml file:
      <groupId>com.google.cloud</groupId>
      <artifactId>libraries-bom</artifactId>
-      <version>26.1.5</version>
+      <version>26.2.0</version>
      <type>pom</type>
      <scope>import</scope>
@@ -49,7 +49,7 @@ If you are using Maven without BOM, add this to your dependencies:
If you are using Gradle 5.x or later, add this to your dependencies:
```Groovy
-implementation platform('com.google.cloud:libraries-bom:26.1.5')
+implementation platform('com.google.cloud:libraries-bom:26.2.0')
implementation 'com.google.cloud:google-cloud-bigquerystorage'
```
diff --git a/samples/snippets/pom.xml b/samples/snippets/pom.xml
index 445e77b289..67529d7ae4 100644
--- a/samples/snippets/pom.xml
+++ b/samples/snippets/pom.xml
@@ -31,7 +31,7 @@
      <groupId>com.google.cloud</groupId>
      <artifactId>libraries-bom</artifactId>
-      <version>26.1.5</version>
+      <version>26.2.0</version>
      <type>pom</type>
      <scope>import</scope>
From dfe2ae35b62dce9f88cb4b7ac5102413c48b0686 Mon Sep 17 00:00:00 2001
From: "gcf-owl-bot[bot]" <78513119+gcf-owl-bot[bot]@users.noreply.github.com>
Date: Wed, 4 Jan 2023 11:17:26 -0500
Subject: [PATCH 5/8] build(deps): bump certifi from 2022.9.24 to 2022.12.7 in
/synthtool/gcp/templates/java_library/.kokoro (#1732) (#1910)
build(deps): bump certifi
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.9.24 to 2022.12.7.
- [Release notes](https://github.com/certifi/python-certifi/releases)
- [Commits](https://github.com/certifi/python-certifi/compare/2022.09.24...2022.12.07)
---
updated-dependencies:
- dependency-name: certifi
dependency-type: direct:production
...
Signed-off-by: dependabot[bot]
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jeff Ching
Source-Link: https://github.com/googleapis/synthtool/commit/ae0d43e5f17972981fe501ecf5a5d20055128bea
Post-Processor: gcr.io/cloud-devrel-public-resources/owlbot-java:latest@sha256:9de537d592b60e5eac73b374a28263969bae91ecdb29b445e894576fbf54851c
Signed-off-by: dependabot[bot]
Co-authored-by: Owl Bot
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jeff Ching
---
.github/.OwlBot.lock.yaml | 2 +-
.kokoro/requirements.in | 2 +-
.kokoro/requirements.txt | 6 +++---
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/.github/.OwlBot.lock.yaml b/.github/.OwlBot.lock.yaml
index 4ca0036da3..288e394897 100644
--- a/.github/.OwlBot.lock.yaml
+++ b/.github/.OwlBot.lock.yaml
@@ -13,4 +13,4 @@
# limitations under the License.
docker:
image: gcr.io/cloud-devrel-public-resources/owlbot-java:latest
- digest: sha256:27b1b1884dce60460d7521b23c2a73376cba90c0ef3d9f0d32e4bdb786959cfd
+ digest: sha256:9de537d592b60e5eac73b374a28263969bae91ecdb29b445e894576fbf54851c
diff --git a/.kokoro/requirements.in b/.kokoro/requirements.in
index 924f94ae6f..a5010f77d4 100644
--- a/.kokoro/requirements.in
+++ b/.kokoro/requirements.in
@@ -17,7 +17,7 @@ pycparser==2.21
pyperclip==1.8.2
python-dateutil==2.8.2
requests==2.27.1
-certifi==2022.9.24
+certifi==2022.12.7
importlib-metadata==4.8.3
zipp==3.6.0
google_api_core==2.8.2
diff --git a/.kokoro/requirements.txt b/.kokoro/requirements.txt
index 71fcafc703..15c404aa5a 100644
--- a/.kokoro/requirements.txt
+++ b/.kokoro/requirements.txt
@@ -16,9 +16,9 @@ cachetools==4.2.4 \
# via
# -r requirements.in
# google-auth
-certifi==2022.9.24 \
- --hash=sha256:0d9c601124e5a6ba9712dbc60d9c53c21e34f5f641fe83002317394311bdce14 \
- --hash=sha256:90c1a32f1d68f940488354e36370f6cca89f0f106db09518524c88d6ed83f382
+certifi==2022.12.7 \
+ --hash=sha256:35824b4c3a97115964b408844d64aa14db1cc518f6562e8d7261699d1350a9e3 \
+ --hash=sha256:4ad3232f5e926d6718ec31cfc1fcadfde020920e278684144551c91769c7bc18
# via
# -r requirements.in
# requests
From a0a5d52cdd06739992944126a89fe58daf4ee605 Mon Sep 17 00:00:00 2001
From: Mend Renovate
Date: Wed, 4 Jan 2023 17:48:51 +0100
Subject: [PATCH 6/8] deps: update dependency org.json:json to v20220924
(#1799)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* deps: update dependency org.json:json to v20220924
* 🦉 Updates from OwlBot post-processor
See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md
* Update JsonToProtoMessageTest.java
Co-authored-by: Owl Bot
Co-authored-by: Neenu Shaji
---
.../cloud/bigquery/storage/v1beta2/JsonToProtoMessageTest.java | 2 +-
pom.xml | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/google-cloud-bigquerystorage/src/test/java/com/google/cloud/bigquery/storage/v1beta2/JsonToProtoMessageTest.java b/google-cloud-bigquerystorage/src/test/java/com/google/cloud/bigquery/storage/v1beta2/JsonToProtoMessageTest.java
index bcdeda813b..c340d22e9a 100644
--- a/google-cloud-bigquerystorage/src/test/java/com/google/cloud/bigquery/storage/v1beta2/JsonToProtoMessageTest.java
+++ b/google-cloud-bigquerystorage/src/test/java/com/google/cloud/bigquery/storage/v1beta2/JsonToProtoMessageTest.java
@@ -935,7 +935,7 @@ public void testRepeatedWithMixedTypes() throws Exception {
Assert.fail("should fail");
} catch (IllegalArgumentException e) {
assertEquals(
- "JSONObject does not have a double field at root.test_repeated[2].", e.getMessage());
+ "JSONObject does not have a double field at root.test_repeated[0].", e.getMessage());
}
}
diff --git a/pom.xml b/pom.xml
index b508f7f6db..c3694c4634 100644
--- a/pom.xml
+++ b/pom.xml
@@ -118,7 +118,7 @@
      <groupId>org.json</groupId>
      <artifactId>json</artifactId>
-      <version>20200518</version>
+      <version>20220924</version>
From da37e669134742df1c4165264ef2746ea7a1503a Mon Sep 17 00:00:00 2001
From: Mend Renovate
Date: Wed, 4 Jan 2023 17:49:17 +0100
Subject: [PATCH 7/8] chore(deps): update dependency
com.google.cloud:google-cloud-bigquerystorage to v2.27.0 (#1911)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* chore(deps): update dependency com.google.cloud:google-cloud-bigquerystorage to v2.27.0
* 🦉 Updates from OwlBot post-processor
See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md
Co-authored-by: Owl Bot
---
README.md | 2 +-
samples/install-without-bom/pom.xml | 2 +-
tutorials/JsonWriterDefaultStream/pom.xml | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index cea68e863c..9b46ae61fb 100644
--- a/README.md
+++ b/README.md
@@ -41,7 +41,7 @@ If you are using Maven without BOM, add this to your dependencies:
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-bigquerystorage</artifactId>
-      <version>2.26.0</version>
+      <version>2.27.0</version>
```
diff --git a/samples/install-without-bom/pom.xml b/samples/install-without-bom/pom.xml
index 0731c1a60d..7521245c58 100644
--- a/samples/install-without-bom/pom.xml
+++ b/samples/install-without-bom/pom.xml
@@ -30,7 +30,7 @@
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-bigquerystorage</artifactId>
-      <version>2.26.0</version>
+      <version>2.27.0</version>
diff --git a/tutorials/JsonWriterDefaultStream/pom.xml b/tutorials/JsonWriterDefaultStream/pom.xml
index 7b86d9520f..18529616f3 100644
--- a/tutorials/JsonWriterDefaultStream/pom.xml
+++ b/tutorials/JsonWriterDefaultStream/pom.xml
@@ -19,7 +19,7 @@
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-bigquerystorage</artifactId>
-      <version>2.26.0</version>
+      <version>2.27.0</version>
      <groupId>com.google.cloud</groupId>
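
The JsonWriterDefaultStream tutorial module bumped above targets the special _default write stream described in the storage.proto comments reflowed earlier in this series. As a rough, self-contained sketch only (not the tutorial code itself; the table name and one-column schema are placeholders), appending JSON rows through the default stream looks roughly like this:

import com.google.api.core.ApiFuture;
import com.google.cloud.bigquery.storage.v1.AppendRowsResponse;
import com.google.cloud.bigquery.storage.v1.JsonStreamWriter;
import com.google.cloud.bigquery.storage.v1.TableFieldSchema;
import com.google.cloud.bigquery.storage.v1.TableName;
import com.google.cloud.bigquery.storage.v1.TableSchema;
import org.json.JSONArray;
import org.json.JSONObject;

public class DefaultStreamSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder identifiers and schema; a real caller would mirror the target table.
    TableName parentTable = TableName.of("my-project", "my_dataset", "my_table");
    TableSchema schema =
        TableSchema.newBuilder()
            .addFields(
                TableFieldSchema.newBuilder()
                    .setName("message")
                    .setType(TableFieldSchema.Type.STRING)
                    .setMode(TableFieldSchema.Mode.NULLABLE)
                    .build())
            .build();

    // Writing to the table's _default stream: no CreateWriteStream or
    // FinalizeWriteStream calls, and rows become visible as they are appended.
    try (JsonStreamWriter writer =
        JsonStreamWriter.newBuilder(parentTable.toString(), schema).build()) {
      JSONArray rows = new JSONArray();
      rows.put(new JSONObject().put("message", "hello"));

      ApiFuture<AppendRowsResponse> response = writer.append(rows);
      response.get(); // Block for the sketch; real code would attach a callback.
    }
  }
}
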
From 86ab29414b61e7b3eea4e3a462de44711889be60 Mon Sep 17 00:00:00 2001
From: "release-please[bot]"
<55107282+release-please[bot]@users.noreply.github.com>
Date: Wed, 4 Jan 2023 18:56:14 +0000
Subject: [PATCH 8/8] chore(main): release 2.28.0 (#1914)
:robot: I have created a release *beep* *boop*
---
## [2.28.0](https://togithub.com/googleapis/java-bigquerystorage/compare/v2.27.0...v2.28.0) (2023-01-04)
### Features
* Add estimated number of rows to CreateReadSession response ([#1913](https://togithub.com/googleapis/java-bigquerystorage/issues/1913)) ([4840b26](https://togithub.com/googleapis/java-bigquerystorage/commit/4840b26956c22e40b6edcefe57f26dd0386e90e5))
### Dependencies
* Update dependency com.google.cloud:google-cloud-bigquery to v2.20.0 ([#1912](https://togithub.com/googleapis/java-bigquerystorage/issues/1912)) ([e9e7ac3](https://togithub.com/googleapis/java-bigquerystorage/commit/e9e7ac3d4e655f7b77d830108226891c45464069))
* Update dependency org.json:json to v20220924 ([#1799](https://togithub.com/googleapis/java-bigquerystorage/issues/1799)) ([a0a5d52](https://togithub.com/googleapis/java-bigquerystorage/commit/a0a5d52cdd06739992944126a89fe58daf4ee605))
---
This PR was generated with [Release Please](https://togithub.com/googleapis/release-please). See [documentation](https://togithub.com/googleapis/release-please#release-please).
---
CHANGELOG.md | 13 +++++++++++++
README.md | 2 +-
google-cloud-bigquerystorage-bom/pom.xml | 16 ++++++++--------
google-cloud-bigquerystorage/pom.xml | 4 ++--
grpc-google-cloud-bigquerystorage-v1/pom.xml | 4 ++--
.../pom.xml | 4 ++--
.../pom.xml | 4 ++--
pom.xml | 16 ++++++++--------
proto-google-cloud-bigquerystorage-v1/pom.xml | 4 ++--
.../pom.xml | 4 ++--
.../pom.xml | 4 ++--
samples/install-without-bom/pom.xml | 2 +-
samples/snapshot/pom.xml | 2 +-
tutorials/JsonWriterDefaultStream/pom.xml | 2 +-
versions.txt | 14 +++++++-------
15 files changed, 54 insertions(+), 41 deletions(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 42b452bca4..eada7c4008 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,18 @@
# Changelog
+## [2.28.0](https://github.com/googleapis/java-bigquerystorage/compare/v2.27.0...v2.28.0) (2023-01-04)
+
+
+### Features
+
+* Add estimated number of rows to CreateReadSession response ([#1913](https://github.com/googleapis/java-bigquerystorage/issues/1913)) ([4840b26](https://github.com/googleapis/java-bigquerystorage/commit/4840b26956c22e40b6edcefe57f26dd0386e90e5))
+
+
+### Dependencies
+
+* Update dependency com.google.cloud:google-cloud-bigquery to v2.20.0 ([#1912](https://github.com/googleapis/java-bigquerystorage/issues/1912)) ([e9e7ac3](https://github.com/googleapis/java-bigquerystorage/commit/e9e7ac3d4e655f7b77d830108226891c45464069))
+* Update dependency org.json:json to v20220924 ([#1799](https://github.com/googleapis/java-bigquerystorage/issues/1799)) ([a0a5d52](https://github.com/googleapis/java-bigquerystorage/commit/a0a5d52cdd06739992944126a89fe58daf4ee605))
+
## [2.27.0](https://github.com/googleapis/java-bigquerystorage/compare/v2.26.0...v2.27.0) (2022-12-12)
diff --git a/README.md b/README.md
index 9b46ae61fb..cea68e863c 100644
--- a/README.md
+++ b/README.md
@@ -41,7 +41,7 @@ If you are using Maven without BOM, add this to your dependencies:
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-bigquerystorage</artifactId>
-      <version>2.27.0</version>
+      <version>2.26.0</version>
```
diff --git a/google-cloud-bigquerystorage-bom/pom.xml b/google-cloud-bigquerystorage-bom/pom.xml
index b1d7d1a570..6e0e8eb85b 100644
--- a/google-cloud-bigquerystorage-bom/pom.xml
+++ b/google-cloud-bigquerystorage-bom/pom.xml
@@ -3,7 +3,7 @@
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-bigquerystorage-bom</artifactId>
- <version>2.27.1-SNAPSHOT</version>
+ <version>2.28.0</version>
  <packaging>pom</packaging>
    <groupId>com.google.cloud</groupId>
@@ -52,37 +52,37 @@
      <groupId>com.google.cloud</groupId>
      <artifactId>google-cloud-bigquerystorage</artifactId>
-      <version>2.27.1-SNAPSHOT</version>
+      <version>2.28.0</version>
      <groupId>com.google.api.grpc</groupId>
      <artifactId>grpc-google-cloud-bigquerystorage-v1beta1</artifactId>
-      <version>0.151.1-SNAPSHOT</version>
+      <version>0.152.0</version>
      <groupId>com.google.api.grpc</groupId>
      <artifactId>grpc-google-cloud-bigquerystorage-v1beta2</artifactId>
-      <version>0.151.1-SNAPSHOT</version>
+      <version>0.152.0</version>
      <groupId>com.google.api.grpc</groupId>
      <artifactId>grpc-google-cloud-bigquerystorage-v1</artifactId>
-      <version>2.27.1-SNAPSHOT</version>
+      <version>2.28.0</version>
      <groupId>com.google.api.grpc</groupId>
      <artifactId>proto-google-cloud-bigquerystorage-v1beta1</artifactId>
-      <version>0.151.1-SNAPSHOT</version>
+      <version>0.152.0</version>
      <groupId>com.google.api.grpc</groupId>
      <artifactId>proto-google-cloud-bigquerystorage-v1beta2</artifactId>
-      <version>0.151.1-SNAPSHOT</version>
+      <version>0.152.0</version>
      <groupId>com.google.api.grpc</groupId>
      <artifactId>proto-google-cloud-bigquerystorage-v1</artifactId>
-      <version>2.27.1-SNAPSHOT</version>
+      <version>2.28.0</version>