Update scalafmt-core to 2.7.5 #1123

Open
wants to merge 3 commits into base: master
2 changes: 2 additions & 0 deletions .git-blame-ignore-revs
@@ -0,0 +1,2 @@
+# Scala Steward: Reformat with scalafmt 2.7.5
+80b323e2849c43beedc7bfead21b02ea3c1bb087
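The ignored revision above is the commit that applies the scalafmt 2.7.5 reformat. As a usage note (not part of this diff): git 2.23+ can skip it during blame with git blame --ignore-revs-file .git-blame-ignore-revs, or once per clone via git config blame.ignoreRevsFile .git-blame-ignore-revs.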
2 changes: 1 addition & 1 deletion .scalafmt.conf
@@ -1,4 +1,4 @@
-version=2.6.4
+version=2.7.5
maxColumn = 120
preset = IntelliJ
align.preset = most
@@ -5,8 +5,7 @@ import com.typesafe.scalalogging.LazyLogging
import org.apache.hadoop.io.compress.CompressionCodec
import org.tukaani.xz.FilterOptions

-/**
- * We don't ship every single compression codec jar with the connector as that would make it bloated. For running the tests for certain codecs we can make the additional jars available to the container.
+/** We don't ship every single compression codec jar with the connector as that would make it bloated. For running the tests for certain codecs we can make the additional jars available to the container.
*/
object ProvidedJars extends LazyLogging {

@@ -199,8 +199,7 @@ abstract class CoreSinkTaskTestCases[

}

-/**
- * The file sizes of the 3 records above come out as 44,46,44.
+/** The file sizes of the 3 records above come out as 44,46,44.
* We're going to set the threshold to 80 - so once we've written 2 records to a file then the
* second file should only contain a single record. This second file won't have been written yet.
*/
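To make the arithmetic in the comment above concrete, here is a minimal standalone sketch of size-based file rolling, independent of the connector's actual classes: with record sizes 44, 46 and 44 and a threshold of 80, the first file closes at two records (90 bytes) and the second file stays open with one.

// Standalone illustration of the commit-threshold arithmetic described above.
// Not the connector's implementation; just the rolling rule it relies on.
object ThresholdSketch extends App {
  val recordSizes = List(44L, 46L, 44L)
  val threshold   = 80L

  // Assign each record to a file, rolling to a new file once the current one
  // has reached the threshold.
  val files = recordSizes.foldLeft(List.empty[List[Long]]) {
    case (Nil, size)                                         => List(List(size))
    case (current :: done, size) if current.sum >= threshold => List(size) :: current :: done
    case (current :: done, size)                             => (size :: current) :: done
  }.reverse.map(_.reverse)

  println(files)            // List(List(44, 46), List(44))
  println(files.map(_.sum)) // List(90, 44): first file is over 80 and flushed, second is not
}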
@@ -260,8 +259,7 @@ abstract class CoreSinkTaskTestCases[
task
}

-/**
- * The difference in this test is that the sink is opened again, which will cause the offsets to be copied to the
+/** The difference in this test is that the sink is opened again, which will cause the offsets to be copied to the
* context
*/
unitUnderTest should "put existing offsets to the context" in {
@@ -583,8 +581,7 @@ abstract class CoreSinkTaskTestCases[

}

-/**
- * As soon as one file is eligible for writing, it will write all those from the same topic partition. Therefore 4
+/** As soon as one file is eligible for writing, it will write all those from the same topic partition. Therefore 4
* files are written instead of 2, as there are 2 points at which the write is triggered and the half-full files must
* be written as well as those reaching the threshold.
*/
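A compact sketch of the flush rule just described, under the assumption that open files are tracked per output path within one topic partition (names are illustrative, not the sink's API):

// When any open file for a topic partition reaches the threshold, the sink
// described above flushes *all* open files for that topic partition,
// including half-full ones. Illustrative only.
def filesToFlush(openFileSizes: Map[String, Long], threshold: Long): Map[String, Long] =
  if (openFileSizes.values.exists(_ >= threshold)) openFileSizes
  else Map.empty

// Example: two open files for the same topic partition; the 90-byte file
// crosses the threshold and drags the 44-byte file along with it:
// filesToFlush(Map("prefix1/file1" -> 90L, "prefix2/file1" -> 44L), 80L) returns both entries.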
@@ -1125,8 +1122,7 @@ abstract class CoreSinkTaskTestCases[
)
}

-/**
- * This should write partition 1 but not partition 0
+/** This should write partition 1 but not partition 0
*/
unitUnderTest should "write multiple partitions independently" in {

@@ -1555,8 +1551,7 @@ abstract class CoreSinkTaskTestCases[
),
)

-/**
- * This should write partition 1 but not partition 0
+/** This should write partition 1 but not partition 0
*/
unitUnderTest should "partition by nested value fields" in {

@@ -1594,8 +1589,7 @@ abstract class CoreSinkTaskTestCases[
)
}

-/**
- * This should write partition 1 but not partition 0
+/** This should write partition 1 but not partition 0
*/
unitUnderTest should "partition by nested key fields" in {

@@ -1689,8 +1683,7 @@ abstract class CoreSinkTaskTestCases[
)
}

-/**
- * This should write partition 1 but not partition 0
+/** This should write partition 1 but not partition 0
*/
unitUnderTest should "partition by nested header fields" in {

@@ -65,8 +65,7 @@ object S3ObjectKey {

}

-/**
- * Validates a prefix is valid. It should not start and end with /.
+/** Validates a prefix is valid. It should not start and end with /.
* Allows 0-9,a-z,A-Z,!, - , _, ., *, ', ), (, and /.
* Does not start and end with /
*
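A hedged sketch of the prefix rule documented above (the character whitelist plus the no-leading/trailing-slash requirement); this is an illustration, not the validator in S3ObjectKey:

// Illustrative prefix check matching the rules in the comment above:
// only 0-9, a-z, A-Z, !, -, _, ., *, ', (, ), / are allowed, and the prefix
// must not start or end with '/'.
def isValidPrefix(prefix: String): Boolean =
  prefix.nonEmpty &&
    prefix.matches("[0-9a-zA-Z!\\-_.*'()/]+") &&
    !prefix.startsWith("/") &&
    !prefix.endsWith("/")

// isValidPrefix("backups/avro")  => true
// isValidPrefix("/backups/")     => false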
@@ -30,11 +30,10 @@ class AwsS3DirectoryLister(connectorTaskId: ConnectorTaskId, s3Client: S3Client)
extends LazyLogging
with DirectoryLister {

-private val listObjectsF: ListObjectsV2Request => IO[Iterator[ListObjectsV2Response]] = e =>
-IO(s3Client.listObjectsV2Paginator(e).iterator().asScala)
+private val listObjectsF: ListObjectsV2Request => IO[Iterator[ListObjectsV2Response]] =
+e => IO(s3Client.listObjectsV2Paginator(e).iterator().asScala)

-/**
- * @param wildcardExcludes allows ignoring paths containing certain strings. Mainly it is used to prevent us from reading anything inside the .indexes key prefix, as these should be ignored by the source.
+/** @param wildcardExcludes allows ignoring paths containing certain strings. Mainly it is used to prevent us from reading anything inside the .indexes key prefix, as these should be ignored by the source.
*/
override def findDirectories(
bucketAndPrefix: CloudLocation,
@@ -67,8 +66,7 @@ class AwsS3DirectoryLister(connectorTaskId: ConnectorTaskId, s3Client: S3Client)
listObjectsF(createListObjectsRequest(bucketAndPrefix))
}
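The wildcardExcludes behaviour described in these comments amounts to a substring filter over candidate paths; a minimal sketch (the function name is illustrative, not the connector's API):

// Keep a path only if it contains none of the excluded fragments,
// e.g. anything under the ".indexes" key prefix is dropped.
def includePath(path: String, wildcardExcludes: Set[String]): Boolean =
  !wildcardExcludes.exists(fragment => path.contains(fragment))

// includePath("myTopic/1/file1.json", Set(".indexes")) => true
// includePath(".indexes/myTopic/1/0", Set(".indexes")) => false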

-/**
- * @param wildcardExcludes allows ignoring paths containing certain strings. Mainly it is used to prevent us from reading anything inside the .indexes key prefix, as these should be ignored by the source.
+/** @param wildcardExcludes allows ignoring paths containing certain strings. Mainly it is used to prevent us from reading anything inside the .indexes key prefix, as these should be ignored by the source.
*/
private def flattenPrefixes(
bucketAndPrefix: CloudLocation,
@@ -279,8 +279,7 @@ class AwsS3StorageInterface(connectorTaskId: ConnectorTaskId, s3Client: S3Client
.map(lmValue => S3FileMetadata(fileName, lmValue))
.orElse(getMetadata(bucket, fileName).map(oMeta => S3FileMetadata(fileName, oMeta.lastModified)).toOption)

-/**
- * Gets the system name for use in log messages.
+/** Gets the system name for use in log messages.
*
* @return
*/
@@ -19,8 +19,7 @@ import cats.data.Validated
import io.lenses.streamreactor.connect.cloud.common.model.location.CloudLocation
import io.lenses.streamreactor.connect.cloud.common.model.location.CloudLocationValidator

-/**
- * This is a best-efforts validator for Datalake Container names. It won't validate DNS, ownership etc but it will allow the sink to fail fast in case validation fails on the broad rules.
+/** This is a best-efforts validator for Datalake Container names. It won't validate DNS, ownership etc but it will allow the sink to fail fast in case validation fails on the broad rules.
*/
object DatalakeLocationValidator extends CloudLocationValidator {

@@ -33,8 +32,7 @@ object DatalakeLocationValidator {
} yield location,
)

-/**
- * From [[https://learn.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata Microsoft Datalake Docs]]
+/** From [[https://learn.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata Microsoft Datalake Docs]]
* A container name must be a valid DNS name, conforming to the following naming rules:
* <ul>
* <li>Container names must start or end with a letter or number, and can contain only letters, numbers, and the hyphen/minus (-) character.</li>
@@ -221,8 +221,7 @@ class DatalakeStorageInterface(connectorTaskId: ConnectorTaskId, client: DataLak
}.sequence
} yield ()

-/**
- * Gets the system name for use in log messages.
+/** Gets the system name for use in log messages.
*
* @return
*/
@@ -49,19 +49,21 @@ object DatalakeContinuingPageIterableAdaptor {
val entries: Seq[PathItem] = pgValue.toSeq
val existsOnPage =
lastFilename.map(lastFileName => entries.indexWhere(_.getName == lastFileName)).getOrElse(-1)
-val results = if (existsOnPage > -1) {
-// skip until we get to the one we're interested in
-entries.splitAt(existsOnPage + 1)._2
-} else {
-entries
-}
+val results =
+if (existsOnPage > -1) {
+// skip until we get to the one we're interested in
+entries.splitAt(existsOnPage + 1)._2
+} else {
+entries
+}
val soFar = results.take(numResults)
val exhausted = (results.size - numResults) == 0
-val pgToken: Option[String] = if (exhausted) {
-page.getContinuationToken.some
-} else {
-continuationToken
-}
+val pgToken: Option[String] =
+if (exhausted) {
+page.getContinuationToken.some
+} else {
+continuationToken
+}

if (soFar.size < numResults) {
loop(pagedIter,
@@ -20,8 +20,7 @@ import com.microsoft.azure.documentdb.ConnectionPolicy
import com.microsoft.azure.documentdb.DocumentClient
import org.apache.http.HttpHost

-/**
- * Creates an instance of Azure DocumentClient class
+/** Creates an instance of Azure DocumentClient class
*/
object DocumentClientProvider {
def get(settings: DocumentDbSinkSettings): DocumentClient = {
@@ -23,8 +23,7 @@ import io.lenses.streamreactor.common.config.base.const.TraitConfigConst.MAX_RET
import io.lenses.streamreactor.common.config.base.const.TraitConfigConst.PROGRESS_ENABLED_CONST
import io.lenses.streamreactor.common.config.base.const.TraitConfigConst.RETRY_INTERVAL_PROP_SUFFIX

-/**
- * Holds the constants used in the config.
+/** Holds the constants used in the config.
*/

object DocumentDbConfigConstants {
@@ -36,16 +36,14 @@ object SinkRecordConverter {

ISO_DATE_FORMAT.setTimeZone(TimeZone.getTimeZone("UTC"))

-/**
- * Creates a Azure Document Db document from a HashMap
+/** Creates a Azure Document Db document from a HashMap
*
* @param map
* @return
*/
def fromMap(map: util.Map[String, AnyRef]): Document = new Document(Json.toJson(map))

-/**
- * Creates an Azure DocumentDb document from a the Kafka Struct
+/** Creates an Azure DocumentDb document from a the Kafka Struct
*
* @param record
* @return
@@ -177,8 +175,7 @@ object SinkRecordConverter {
}
}

-/**
- * Creates an Azure Document DB document from Json
+/** Creates an Azure Document DB document from Json
*
* @param record - The instance to the json node
* @return An instance of a Azure Document DB document
@@ -39,8 +39,7 @@ import scala.util.Failure
import scala.util.Success
import scala.util.Try

-/**
- * <h1>DocumentDbSinkConnector</h1>
+/** <h1>DocumentDbSinkConnector</h1>
* Kafka Connect Azure DocumentDb Sink connector
*
* Sets up DocumentDbSinkTask and configurations for the tasks.
@@ -53,13 +52,11 @@ class DocumentDbSinkConnector private[sink] (builder: DocumentDbSinkSettings =>

def this() = this(DocumentClientProvider.get)

-/**
- * States which SinkTask class to use
+/** States which SinkTask class to use
*/
override def taskClass(): Class[_ <: Task] = classOf[DocumentDbSinkTask]

-/**
- * Set the configuration for each work and determine the split
+/** Set the configuration for each work and determine the split
*
* @param maxTasks The max number of task workers be can spawn
* @return a List of configuration properties per worker
@@ -84,8 +81,7 @@ class DocumentDbSinkConnector private[sink] (builder: DocumentDbSinkSettings =>
}
}
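The split documented for taskConfigs above follows the usual Connect pattern of distributing the configured work across at most maxTasks workers; a rough sketch under that assumption, with illustrative names only (the real connector builds full property maps per task):

// Distribute work items (e.g. KCQL mappings) across at most maxTasks workers.
def splitWork(items: Seq[String], maxTasks: Int): Seq[Seq[String]] =
  if (items.isEmpty || maxTasks <= 0) Seq.empty
  else {
    val groups = math.min(maxTasks, items.size)
    items.grouped(math.ceil(items.size.toDouble / groups).toInt).toSeq
  }

// splitWork(Seq("kcql1", "kcql2", "kcql3"), 2) => Seq(Seq("kcql1", "kcql2"), Seq("kcql3"))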

-/**
- * Start the sink and set to configuration
+/** Start the sink and set to configuration
*
* @param props A map of properties for the connector and worker
*/
@@ -33,8 +33,7 @@ import org.apache.kafka.connect.sink.SinkTaskContext
import scala.annotation.nowarn
import scala.util.Failure

-/**
- * <h1>DocumentDbWriter</h1>
+/** <h1>DocumentDbWriter</h1>
* Azure DocumentDb Json writer for Kafka connect
* Writes a list of Kafka connect sink records to Azure DocumentDb using the JSON support.
*/
@@ -49,8 +48,7 @@ class DocumentDbWriter(configMap: Map[String, Kcql], settings: DocumentDbSinkSet
private val requestOptionsInsert = new RequestOptions
requestOptionsInsert.setConsistencyLevel(settings.consistency)

-/**
- * Write SinkRecords to Azure Document Db.
+/** Write SinkRecords to Azure Document Db.
*
* @param records A list of SinkRecords from Kafka Connect to write.
*/
@@ -59,8 +57,7 @@ class DocumentDbWriter(configMap: Map[String, Kcql], settings: DocumentDbSinkSet
val _ = insert(records)
}

-/**
- * Write SinkRecords to Azure Document Db
+/** Write SinkRecords to Azure Document Db
*
* @param records A list of SinkRecords from Kafka Connect to write.
* @return boolean indication successful write.
@@ -22,14 +22,12 @@ import java.util.Collections
import scala.jdk.CollectionConverters.ListHasAsScala
import scala.jdk.CollectionConverters.SeqHasAsJava

-/**
- * Created by [email protected] on 25/04/16.
+/** Created by [email protected] on 25/04/16.
* stream-reactor
*/
object OffsetHandler {

-/**
- * Recover the offsets
+/** Recover the offsets
*
* @param lookupPartitionKey A partition key for the offset map
* @param sourcePartition A list of datasets .i.e tables to get the partition offsets for
@@ -45,8 +43,7 @@ object OffsetHandler {
context.offsetStorageReader().offsets(partitions)
}
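A hedged sketch of the recovery flow documented above, using the standard Kafka Connect OffsetStorageReader API; the lookup key and table names come from the comments, everything else is illustrative:

import org.apache.kafka.connect.source.SourceTaskContext
import scala.jdk.CollectionConverters.MapHasAsJava
import scala.jdk.CollectionConverters.SeqHasAsJava

// Build one source-partition map per table under the lookup key, then ask
// Connect for any previously committed offsets for those partitions.
def recoverOffsets(
  context: SourceTaskContext,
  lookupPartitionKey: String,
  tables: Seq[String]
): java.util.Map[java.util.Map[String, String], java.util.Map[String, AnyRef]] = {
  val partitions = tables.map(table => Map(lookupPartitionKey -> table).asJava).asJava
  context.offsetStorageReader().offsets(partitions)
}

// recoverOffsets(context, "cassandra.assigned.tables", Seq("orders", "payments"))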

-/**
- * Returns a last stored offset for the partitionKeyValue
+/** Returns a last stored offset for the partitionKeyValue
*
* @param offsets The offsets to search through.
* @param lookupPartitionKey The key for this partition .i.e. cassandra.assigned.tables.
@@ -20,13 +20,11 @@ import com.typesafe.scalalogging.StrictLogging
import java.util
import java.util.concurrent.LinkedBlockingQueue

-/**
- * Created by r on 3/1/16.
+/** Created by r on 3/1/16.
*/
object QueueHelpers extends StrictLogging {

-/**
- * Drain the queue
+/** Drain the queue
*
* @param queue The queue to drain
* @param batchSize Batch size to take
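The drain helper described above maps onto the standard java.util.concurrent API; a minimal sketch (not the project's exact signature):

import java.util
import java.util.concurrent.LinkedBlockingQueue

// Pull up to batchSize elements out of the queue in one call using
// LinkedBlockingQueue.drainTo, returning them as a list.
def drainBatch[T](queue: LinkedBlockingQueue[T], batchSize: Int): util.ArrayList[T] = {
  val batch = new util.ArrayList[T](batchSize)
  queue.drainTo(batch, batchSize)
  batch
}

// val queue = new LinkedBlockingQueue[String]()
// queue.put("a"); queue.put("b"); queue.put("c")
// drainBatch(queue, 2) => ["a", "b"]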