Compare commits


19 Commits

Author SHA1 Message Date
6543
6839010bd6 Changelog v1.12.0 (#11927)
* merge RC-logs

* Update

* Update CHANGELOG.md

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-06-18 11:54:33 -04:00
6543
80da796025 Changelog v1.11.7 (#11953) (#11955)
* Changelog v1.11.7

* Update CHANGELOG.md
2020-06-18 11:44:35 -04:00
6543
113c99512b Fix commenting on non-utf8 encoded files (#11916) (#11950)
* Add comment on non-unicode line to force fail

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Just quote/unquote patch

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2020-06-18 18:22:43 +03:00
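A minimal standalone sketch of the quote/unquote approach this change takes (the real hooks are in the models/issue_comment.go hunk later in this compare; the sample patch bytes here are made up):

```go
package main

import (
	"fmt"
	"strconv"
	"unicode/utf8"
)

func main() {
	// A made-up diff hunk containing an ISO-8859-1 byte (0xE9) that is not valid UTF-8.
	patch := "@@ -1 +1 @@\n-caf\xe9\n+cafe\n"

	// On save (BeforeInsert/BeforeUpdate in the hunk below): quote the patch
	// so only ASCII reaches the TEXT column.
	stored := patch
	if !utf8.ValidString(patch) {
		stored = strconv.Quote(patch)
	}
	fmt.Println(utf8.ValidString(stored)) // true

	// On load (AfterLoad): unquote when the stored value starts with a quote.
	loaded := stored
	if len(stored) > 0 && stored[0] == '"' {
		if unquoted, err := strconv.Unquote(stored); err == nil {
			loaded = unquoted
		}
	}
	fmt.Println(loaded == patch) // true
}
```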
Lunny Xiao
82343f4943 Use google/uuid to instead satori/go.uuid (#11943) (#11946)
Co-authored-by: Lauris BH <lauris@nix.lv>

Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: zeripath <art27@cantab.net>
2020-06-18 10:06:48 -04:00
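For reference, a small sketch of the call-site difference this dependency swap implies; it only uses google/uuid and is not taken from the diff:

```go
package main

import (
	"fmt"

	gouuid "github.com/google/uuid"
)

func main() {
	// satori/go.uuid:  gouuid.NewV4().String()
	// google/uuid:     gouuid.New().String()
	// New() returns a random (version 4) UUID and panics only if the OS
	// random source fails; NewRandom() is the error-returning variant.
	fmt.Println(gouuid.New().String())

	if id, err := gouuid.NewRandom(); err == nil {
		fmt.Println(id.String())
	}
}
```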
Cirno the Strongest
d534007bc4 Align show/hide outdated button on code review block (#11932) (#11944)
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
(cherry picked from commit 6c38f371ea)
2020-06-18 17:32:36 +08:00
6543
6466053b4d [Backport] Update to go-git v5.1.0 (#11936) (#11941)
* update go-git 5.0.0 -> v5.1.0

* vendor

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-06-18 09:05:30 +08:00
techknowlogick
7dc8db9ea8 Global default branch setting (#11918) (#11937)
* Global default branch setting (#11918)

* Global default branch setting

* add to app.ini example per @silverwind

* update per @lunny

Co-authored-by: John Olheiser <john.olheiser@gmail.com>

* Update modules/setting/repository.go

Co-authored-by: John Olheiser <john.olheiser@gmail.com>
2020-06-17 19:32:06 -04:00
6543
ecad970a26 Use ID or Where to instead directly use Get when load object from database (#11925) (#11934)
Backport #11925

Use ID or Where to instead directly use Get when load object from database

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2020-06-17 20:53:43 +01:00
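Background on why the hunks below switch away from Get with a populated bean: xorm's Get builds its WHERE clause from the non-zero fields of the bean it is given, so an explicit .ID(...) or .Where(...) against an empty bean keeps the condition unambiguous. A self-contained sketch using an in-memory SQLite table and a made-up, minimal Attachment model:

```go
package main

import (
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
	"xorm.io/xorm"
)

// Attachment is a made-up, minimal stand-in for the real model.
type Attachment struct {
	ID   int64  `xorm:"pk autoincr"`
	UUID string `xorm:"uuid"`
	Name string
}

func main() {
	x, err := xorm.NewEngine("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	if err := x.Sync2(new(Attachment)); err != nil {
		log.Fatal(err)
	}
	if _, err := x.Insert(&Attachment{UUID: "abc", Name: "file.txt"}); err != nil {
		log.Fatal(err)
	}

	// Old pattern: Get builds its WHERE clause from every non-zero field of
	// the bean, so the condition silently changes with whatever is set on it.
	a1 := &Attachment{ID: 1}
	has, _ := x.Get(a1)
	fmt.Println(has, a1.Name)

	// New pattern: the condition is explicit and the bean starts out empty.
	a2 := &Attachment{}
	has, _ = x.ID(1).Get(a2)
	fmt.Println(has, a2.Name)

	a3 := &Attachment{}
	has, _ = x.Where("uuid=?", "abc").Get(a3)
	fmt.Println(has, a3.Name)
}
```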
6543
47a5c8e1f7 Update CommitsAhead CommitsBehind on Pull BaseBranch Change too (#11912) (#11915)
* Update CommitsAhead CommitsBehind on Pull BaseBranch Change too (#11912)

* CI.restart()
2020-06-16 15:56:47 -04:00
zeripath
6abb8d751c Invalidate comments when file is shortened (#11882) (#11884)
Backport #11882

Fix #10686

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-06-15 13:26:30 -04:00
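A small standalone illustration of the notEnoughLines regexp this change adds (see the models/issue_comment.go hunk later in this compare); the error strings are examples shaped like the messages the regexp targets:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as the notEnoughLines regexp added in models/issue_comment.go:
// when git line-blame reports that the commented line no longer exists because
// the file was shortened, the comment is invalidated instead of erroring out.
var notEnoughLines = regexp.MustCompile(`fatal: file .* has only \d+ lines?`)

func main() {
	for _, msg := range []string{
		"fatal: file README.md has only 3 lines",
		"fatal: file main.go has only 1 line",
		"fatal: no such path 'old/file.go' in HEAD", // handled by the existing check
	} {
		fmt.Printf("%-45s invalidate=%v\n", msg, notEnoughLines.MatchString(msg))
	}
}
```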
Cirno the Strongest
fdc6287973 Rework api/user/repos for pagination (#11827) (#11877)
* Add count to `GetUserRepositories` so that pagination can be supported for `/user/{username}/repos`
* Rework ListMyRepos to use models.SearchRepository

ListMyRepos was an odd one: it first fetched all of the user's own repositories and then tried to supplement them with a map of accessible repositories. The end result was that:

* the pagination limit did not work, because accessible repos were always appended
* the calculated number of pages was incorrect
* when paginating, every accessible repo was shown on every page

It should now work properly. Fixes #11800 and requires no change on the Drone side, since Drone can already interpret and act on the Link header that we now set.

Co-authored-by: Lauris BH <lauris@nix.lv>
(cherry picked from commit 0159851cc3)
2020-06-13 18:35:13 +01:00
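A consumer-side sketch (not part of this compare) of paging through the endpoint via the Link and X-Total-Count headers it now sets; the host, username and token-less URL below are placeholders:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Placeholder host and user; an access token would normally be appended.
	url := "https://gitea.example.com/api/v1/users/someuser/repos?page=1&limit=10"
	for url != "" {
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		fmt.Println("total repos:", resp.Header.Get("X-Total-Count"))
		// ... decode the JSON array in resp.Body for this page here ...
		resp.Body.Close()

		// Follow rel="next" from the Link header; this is what Drone does.
		url = nextLink(resp.Header.Get("Link"))
	}
}

// nextLink extracts the rel="next" target from a Link header, if present.
func nextLink(header string) string {
	for _, part := range strings.Split(header, ",") {
		fields := strings.Split(part, ";")
		if len(fields) < 2 {
			continue
		}
		if strings.TrimSpace(fields[1]) == `rel="next"` {
			return strings.Trim(strings.TrimSpace(fields[0]), "<>")
		}
	}
	return ""
}
```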
zeripath
320031fce6 Handle more pathological branch and tag names (#11843) (#11863)
Backport #11843

It's possible to push quite pathological-looking branch names to Gitea
using git push gitea reasonable-branch:refs/heads/-- at which point
large parts of the UI will break. Similarly you can git push origin
reasonable-tag:refs/tags/-- which will return an error.

This PR fixes the problems these cause. It also changes branch creation
from creating branches directly to pushing them, to ensure that branch
restoration has to pass through the hooks.

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-06-12 14:01:44 -04:00
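A small aside on the mechanics, purely illustrative (the actual change is in modules/repository and routers/repo/branch.go further down in this compare): the destination side of a refspec names the ref literally, which is how a branch called "--" can appear, and routing creation/restoration through a normal push lets the usual hooks reject such names. The branch names below are made up.

```go
package main

import "fmt"

// BranchPrefix mirrors git.BranchPrefix in Gitea's git module.
const BranchPrefix = "refs/heads/"

func main() {
	// The problematic push from the message above: the destination side of the
	// refspec names the ref literally, so this creates a branch called "--".
	fmt.Println("git push gitea reasonable-branch:refs/heads/--")

	// After this change, branch creation/restoration is an ordinary push of a
	// refspec into the repository itself, so pre-receive/update hooks can
	// reject names like the one above.
	oldBranch, newBranch := "master", "feature-x"
	fmt.Println("push refspec:", fmt.Sprintf("%s:%s%s", oldBranch, BranchPrefix, newBranch))
	// Output: push refspec: master:refs/heads/feature-x
}
```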
Cirno the Strongest
ef2f18964e Fix search form button overlap (#11840) (#11864)
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
(cherry picked from commit 8770bceafa)
2020-06-12 13:23:13 +01:00
zeripath
f2bde40804 Add doctor check to set IsArchived false if it is null (partial backport #11853) (#11859)
Partial backport of #11853

Add doctor check to set IsArchived false if it is null.

(The migration change unfortunately cannot be backported.)

Fix #11824

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-06-11 17:08:13 -04:00
zeripath
6b1e5f7f88 Prevent panic on empty HOST for mysql (#11850) (#11856)
Backport #11850

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-06-11 14:27:59 -04:00
Cirno the Strongest
56660c3fd0 Use DEFAULT_PAGING_NUM instead of MAX_RESPONSE_ITEMS in ListOptions (#11831) (#11836)
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
(cherry picked from commit 2b2b3e4c37)
2020-06-10 13:42:10 -04:00
John Olheiser
87a82138c6 Fix reply octicon (#11821) (#11822)
Signed-off-by: jolheiser <john.olheiser@gmail.com>
2020-06-09 12:25:32 -05:00
Cirno the Strongest
d06f98d9a2 Honor DEFAULT_PAGING_NUM for API (#11805) (#11813)
* Honor DEFAULT_PAGING_NUM for API

* set pagination to 10 for tests

* lint

Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
(cherry picked from commit cefbf73aea)
2020-06-09 16:05:21 +03:00
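Combined with 56660c3fd0 above, the effective API paging rule is: an unspecified limit falls back to DEFAULT_PAGING_NUM, and any explicit limit is still capped at MAX_RESPONSE_ITEMS. A self-contained sketch assuming the stock [api] defaults of 30 and 50:

```go
package main

import "fmt"

// Stock defaults from app.ini's [api] section are assumed here:
// DEFAULT_PAGING_NUM = 30, MAX_RESPONSE_ITEMS = 50.
const (
	defaultPagingNum = 30
	maxResponseItems = 50
)

// normalizePageSize mirrors the reworked models.ListOptions.setDefaultValues:
// a missing/zero limit now falls back to DEFAULT_PAGING_NUM instead of the
// maximum, while explicit limits are still capped at MAX_RESPONSE_ITEMS.
func normalizePageSize(requested int) int {
	if requested <= 0 {
		return defaultPagingNum
	}
	if requested > maxResponseItems {
		return maxResponseItems
	}
	return requested
}

func main() {
	for _, limit := range []int{0, 10, 100} {
		fmt.Printf("?limit=%d -> page size %d\n", limit, normalizePageSize(limit))
	}
	// ?limit=0 -> 30, ?limit=10 -> 10, ?limit=100 -> 50
}
```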
zeripath
c52f81eecc Ensure rejected push to refs/pull/index/head fails nicely (#11724) (#11809)
Backport #11724

A pre-receive hook that rejects pushes to refs/pull/index/head
will cause a broken PR, which in turn causes an internal server error
whenever it is viewed. This PR prevents the internal server
error by handling non-existent PR heads and sends a flash error
informing the creator that there was a problem.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-06-08 20:00:12 +01:00
156 changed files with 4456 additions and 1615 deletions

View File

@@ -4,61 +4,7 @@ This changelog goes through all the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.io).
## [1.12.0-rc2](https://github.com/go-gitea/gitea/releases/tag/v1.12.0-rc2) - 2020-06-08
* BUGFIXES
* In File Create/Update API return 404 if Branch does not exist (#11791) (#11795)
* Fix doer of rename repo (#11789) (#11794)
* Initialize SimpleMDE when making a code comment (#11749) (#11785)
* Fix timezone on issue deadline (#11697) (#11784)
* Fix to allow comment poster to edit or delete his own comments (#11671) (#11774)
* Show full 500 error in API when Gitea in dev mode (#11641) (#11753)
* Add missing templates for Matrix system webhooks (#11729) (#11748)
* Fix verification of subkeys of default gpg key (#11713) (#11747)
* Fix styling for commiter on diff view (#11715) (#11744)
* Properly truncate system notices (#11714) (#11742)
* Handle expected errors in FileCreate & FileUpdate API (#11643) (#11718)
* Fix missing authorization check on pull for public repos of private/limited org (#11656) (#11682)
* Update emoji regex (#11584) (#11679)
* Doctor check & fix db consistency (#11111) (#11676)
* Default MSSQL port 0 to allow automatic detection by default (#11642) (#11673)
* Exclude generated files from language statistics (#11653) (#11670)
* Use -1 to disable key algorithm type in ssh.minimum_key_sizes (#11635) (#11662)
* Return json on 500 error from API (#11574) (#11659)
* When must change password only show Signout (#11600) (#11637)
* Backport various styling fixes (#11619)
* Fix wrong milestone in webhook message (#11596) (#11611)
* Fix serviceworker output file and misc improvements (#11562) (#11610)
* When initialising repositories ensure that the user doing the creation is the initializer (#11601) (#11608)
* Prevent empty query parameter being set on dashboard (#11561) (#11604)
* Fix images in wiki edit preview (#11546) (#11602)
* Allow different HardBreaks settings for documents and comments (#11515) (#11599)
* Prevent (caught) panic on login (#11590) (#11597)
* Prevent transferring repos to invisible orgs (#11517) (#11549)
* Move serviceworker to workbox and fix SSE interference (#11538) (#11547)
* API PullReviewComment HTMLPullURL should return the HTMLURL (#11501) (#11533)
* Fix repo-list private and total count bugs (#11500) (#11532)
* Fix form action template substitutions on admin pages (backport #11519) (#11531)
* Fix a bug where the reaction emoji doesn't disappear. (#11489) (#11530)
* TrimSpace when reading InternalToken from a file (#11502) (#11524)
* Fix selected line color in arc-green (#11492) (#11520)
* Make localstorage read ssh or https correctly (#11483) (#11490)
* ENHANCEMENTS
* Make tabular menu styling consistent for arc-green (#11570) (#11798)
* Add option to API to update PullRequest base branch (#11666) (#11796)
* Increase maximum SQLite variables count to 32766 (#11696) (#11783)
* Update emoji dataset with skin tone variants (#11678) (#11763)
* Add logging to long migrations (#11647) (#11691)
* Change language statistics to save size instead of percentage (#11681) (#11690)
* Fix alignment for commits on dashboard (#11595) (#11680)
* Handle expected errors in AddGPGkey API (#11644) (#11661)
* Close EventSource before unloading the page (#11539) (#11557)
* Ensure emoji render with regular font-weight (#11541) (#11545)
* Fix webpack chunk loading with STATIC_URL_PREFIX (#11526) (#11542)
* Tweak reaction buttons (#11516)
* Use more toned colors for selected line (#11493) (#11511)
## [1.12.0-rc1](https://github.com/go-gitea/gitea/releases/tag/v1.12.0-rc1) - 2020-05-18
## [1.12.0](https://github.com/go-gitea/gitea/releases/tag/v1.12.0) - 2020-06-17
* BREAKING
* When using API CreateRelease set created_unix to the tag commit time (#11218)
@@ -68,6 +14,8 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Return 404 from Contents API when items don't exist (#10323)
* Notification API should always return a JSON object with the current count of notifications (#10059)
* Remove migration support from versions earlier than 1.6.0 (#10026)
* SECURITY
* Use -1 to disable key algorithm type in ssh.minimum_key_sizes (#11635) (#11662)
* FEATURES
* Improve config logging when WrappedQueue times out (#11174)
* Add branch delete to API (#11112)
@@ -109,6 +57,53 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Language statistics bar for repositories (#8037)
* Restricted users (#6274)
* BUGFIXES
* Fix commenting on non-utf8 encoded files (#11916) (#11950)
* Use google/uuid to instead satori/go.uuid (#11943) (#11946)
* Align show/hide outdated button on code review block (#11932) (#11944)
* Update to go-git v5.1.0 (#11936) (#11941)
* Use ID or Where to instead directly use Get when load object from database (#11925) (#11934)
* Update CommitsAhead CommitsBehind on Pull BaseBranch Change too (#11912) (#11915)
* Invalidate comments when file is shortened (#11882) (#11884)
* Rework api/user/repos for pagination (#11827) (#11877)
* Handle more pathological branch and tag names (#11843) (#11863)
* Add doctor check to set IsArchived false if it is null (partial #11853) (#11859)
* Prevent panic on empty HOST for mysql (#11850) (#11856)
* Use DEFAULT_PAGING_NUM instead of MAX_RESPONSE_ITEMS in ListOptions (#11831) (#11836)
* Fix reply octicon (#11821) (#11822)
* Honor DEFAULT_PAGING_NUM for API (#11805) (#11813)
* Ensure rejected push to refs/pull/index/head fails nicely (#11724) (#11809)
* In File Create/Update API return 404 if Branch does not exist (#11791) (#11795)
* Fix doer of rename repo (#11789) (#11794)
* Initialize SimpleMDE when making a code comment (#11749) (#11785)
* Fix timezone on issue deadline (#11697) (#11784)
* Fix to allow comment poster to edit or delete his own comments (#11671) (#11774)
* Show full 500 error in API when Gitea in dev mode (#11641) (#11753)
* Add missing templates for Matrix system webhooks (#11729) (#11748)
* Fix verification of subkeys of default gpg key (#11713) (#11747)
* Fix styling for commiter on diff view (#11715) (#11744)
* Properly truncate system notices (#11714) (#11742)
* Handle expected errors in FileCreate & FileUpdate API (#11643) (#11718)
* Fix missing authorization check on pull for public repos of private/limited org (#11656) (#11682)
* Doctor check & fix db consistency (#11111) (#11676)
* Exclude generated files from language statistics (#11653) (#11670)
* Return json on 500 error from API (#11574) (#11659)
* When must change password only show Signout (#11600) (#11637)
* Backport various styling fixes (#11619)
* Fix wrong milestone in webhook message (#11596) (#11611)
* Fix serviceworker output file and misc improvements (#11562) (#11610)
* When initialising repositories ensure that the user doing the creation is the initializer (#11601) (#11608)
* Prevent empty query parameter being set on dashboard (#11561) (#11604)
* Fix images in wiki edit preview (#11546) (#11602)
* Prevent (caught) panic on login (#11590) (#11597)
* Prevent transferring repos to invisible orgs (#11517) (#11549)
* Move serviceworker to workbox and fix SSE interference (#11538) (#11547)
* API PullReviewComment HTMLPullURL should return the HTMLURL (#11501) (#11533)
* Fix repo-list private and total count bugs (#11500) (#11532)
* Fix form action template substitutions on admin pages (backport #11519) (#11531)
* Fix a bug where the reaction emoji doesn't disappear. (#11489) (#11530)
* TrimSpace when reading InternalToken from a file (#11502) (#11524)
* Fix selected line color in arc-green (#11492) (#11520)
* Make localstorage read ssh or https correctly (#11483) (#11490)
* Check branch protection on IsUserAllowedToUpdate (#11448)
* Fix margin on attached segment headers when they are separated by other element (#11425)
* Fix webhook template when validation errors occur (#11421)
@@ -176,6 +171,22 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Fix wrong original git service type on a migrated repository (#9693)
* Fix ref links in issue overviews for tags (#8742)
* ENHANCEMENTS
* Fix search form button overlap (#11840) (#11864)
* Make tabular menu styling consistent for arc-green (#11570) (#11798)
* Add option to API to update PullRequest base branch (#11666) (#11796)
* Increase maximum SQLite variables count to 32766 (#11696) (#11783)
* Update emoji dataset with skin tone variants (#11678) (#11763)
* Add logging to long migrations (#11647) (#11691)
* Change language statistics to save size instead of percentage (#11681) (#11690)
* Allow different HardBreaks settings for documents and comments (#11515) (#11599)
* Fix alignment for commits on dashboard (#11595) (#11680)
* Default MSSQL port 0 to allow automatic detection by default (#11642) (#11673)
* Handle expected errors in AddGPGkey API (#11644) (#11661)
* Close EventSource before unloading the page (#11539) (#11557)
* Ensure emoji render with regular font-weight (#11541) (#11545)
* Fix webpack chunk loading with STATIC_URL_PREFIX (#11526) (#11542)
* Tweak reaction buttons (#11516)
* Use more toned colors for selected line (#11493) (#11511)
* Increase width for authors on commit view (#11441)
* Hide archived repos by default in repo-list (#11440)
* Better styling for code review comment textarea (#11428)
@@ -338,6 +349,15 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Fix queue log param (#10733)
* Add warning when using relative path to app.ini (#10104)
## [1.11.7](https://github.com/go-gitea/gitea/releases/tag/v1.11.7) - 2020-06-18
* BUGFIXES
* Use ID or Where to instead directly use Get when load object from database (#11925) (#11935)
* Fix __webpack_public_path__ for 1.11 (#11907)
* Fix verification of subkeys of default gpg key (#11713) (#11902)
* Remove unnecessary parentheses in wiki/view template (#11781)
* Doctor fix xorm.Count nil on sqlite error (#11741)
## [1.11.6](https://github.com/go-gitea/gitea/releases/tag/v1.11.6) - 2020-05-30
* SECURITY

View File

@@ -574,6 +574,22 @@ func runDoctorCheckDBConsistency(ctx *cli.Context) ([]string, error) {
}
}
count, err = models.CountNullArchivedRepository()
if err != nil {
return nil, err
}
if count > 0 {
if ctx.Bool("fix") {
updatedCount, err := models.FixNullArchivedRepository()
if err != nil {
return nil, err
}
results = append(results, fmt.Sprintf("%d repositories with null is_archived updated", updatedCount))
} else {
results = append(results, fmt.Sprintf("%d repositories with null is_archived", count))
}
}
//ToDo: function to recalc all counters
return results, nil

View File

@@ -50,6 +50,8 @@ DISABLED_REPO_UNITS =
DEFAULT_REPO_UNITS = repo.code,repo.releases,repo.issues,repo.pulls,repo.wiki
; Prefix archive files by placing them in a directory named after the repository
PREFIX_ARCHIVE_FILES = true
; The default branch name of new repositories
DEFAULT_BRANCH=master
[repository.editor]
; List of file extensions for which lines should be wrapped in the Monaco editor

View File

@@ -69,6 +69,7 @@ Values containing `#` or `;` must be quoted using `` ` `` or `"""`.
- `ENABLE_PUSH_CREATE_USER`: **false**: Allow users to push local repositories to Gitea and have them automatically created for a user.
- `ENABLE_PUSH_CREATE_ORG`: **false**: Allow users to push local repositories to Gitea and have them automatically created for an org.
- `PREFIX_ARCHIVE_FILES`: **true**: Prefix archive files by placing them in a directory named after the repository.
- `DEFAULT_BRANCH`: **master**: Default branch name of all repositories.
### Repository - Pull Request (`repository.pull-request`)

go.mod (10 changed lines)
View File

@@ -39,7 +39,7 @@ require (
github.com/glycerine/go-unsnap-stream v0.0.0-20190901134440-81cf024a9e0a // indirect
github.com/go-enry/go-enry/v2 v2.5.2
github.com/go-git/go-billy/v5 v5.0.0
github.com/go-git/go-git/v5 v5.0.0
github.com/go-git/go-git/v5 v5.1.0
github.com/go-openapi/jsonreference v0.19.3 // indirect
github.com/go-redis/redis v6.15.2+incompatible
github.com/go-sql-driver/mysql v1.4.1
@@ -49,6 +49,7 @@ require (
github.com/gogs/cron v0.0.0-20171120032916-9f6c956d3e14
github.com/golang/protobuf v1.4.1 // indirect
github.com/google/go-github/v24 v24.0.1
github.com/google/uuid v1.1.1
github.com/gorilla/context v1.1.1
github.com/hashicorp/go-retryablehttp v0.6.6 // indirect
github.com/huandu/xstrings v1.3.0
@@ -85,7 +86,6 @@ require (
github.com/prometheus/procfs v0.0.4 // indirect
github.com/quasoft/websspi v1.0.0
github.com/remyoudompheng/bigfft v0.0.0-20190321074620-2f0d2b0e0001 // indirect
github.com/satori/go.uuid v1.2.0
github.com/sergi/go-diff v1.1.0
github.com/shurcooL/httpfs v0.0.0-20190527155220-6a4d4a70508b // indirect
github.com/shurcooL/vfsgen v0.0.0-20181202132449-6a9ea43bcacd
@@ -102,10 +102,10 @@ require (
github.com/yohcop/openid-go v1.0.0
github.com/yuin/goldmark v1.1.25
github.com/yuin/goldmark-meta v0.0.0-20191126180153-f0638e958b60
golang.org/x/crypto v0.0.0-20200429183012-4b2356b1ed79
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f
golang.org/x/crypto v0.0.0-20200604202706-70a84ac30bf9
golang.org/x/net v0.0.0-20200602114024-627f9648deb9
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d
golang.org/x/sys v0.0.0-20200509044756-6aff5f38e54f
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1
golang.org/x/text v0.3.2
golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1 // indirect
golang.org/x/tools v0.0.0-20200325010219-a49f79bcc224

go.sum (20 changed lines)
View File

@@ -203,8 +203,8 @@ github.com/go-git/go-billy/v5 v5.0.0 h1:7NQHvd9FVid8VL4qVUMm8XifBK+2xCoZ2lSk0agR
github.com/go-git/go-billy/v5 v5.0.0/go.mod h1:pmpqyWchKfYfrkb/UVH4otLvyi/5gJlGI4Hb3ZqZ3W0=
github.com/go-git/go-git-fixtures/v4 v4.0.1 h1:q+IFMfLx200Q3scvt2hN79JsEzy4AmBTp/pqnefH+Bc=
github.com/go-git/go-git-fixtures/v4 v4.0.1/go.mod h1:m+ICp2rF3jDhFgEZ/8yziagdT1C+ZpZcrJjappBCDSw=
github.com/go-git/go-git/v5 v5.0.0 h1:k5RWPm4iJwYtfWoxIJy4wJX9ON7ihPeZZYC1fLYDnpg=
github.com/go-git/go-git/v5 v5.0.0/go.mod h1:oYD8y9kWsGINPFJoLdaScGCN6dlKg23blmClfZwtUVA=
github.com/go-git/go-git/v5 v5.1.0 h1:HxJn9g/E7eYvKW3Fm7Jt4ee8LXfPOm/H1cdDu8vEssk=
github.com/go-git/go-git/v5 v5.1.0/go.mod h1:ZKfuPUoY1ZqIG4QG9BDBh3G4gLM5zvPuSJAozQrZuyM=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
@@ -371,6 +371,8 @@ github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/huandu/xstrings v1.3.0 h1:gvV6jG9dTgFEncxo+AF7PH6MZXi/vZl25owA/8Dg8Wo=
github.com/huandu/xstrings v1.3.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/imdario/mergo v0.3.9 h1:UauaLniWCFHWd+Jp9oCEkTBj8VO/9DKg3PV3VCNMDIg=
github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/issue9/assert v1.3.1/go.mod h1:9Ger+iz8X7r1zMYYwEhh++2wMGWcNN2oVI+zIQXxcio=
github.com/issue9/assert v1.3.2 h1:IaTa37u4m1fUuTH9K9ldO5IONKVDXjLiUO1T9vj0OF0=
@@ -556,8 +558,6 @@ github.com/remyoudompheng/bigfft v0.0.0-20190321074620-2f0d2b0e0001/go.mod h1:qq
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/russross/blackfriday v1.5.2 h1:HyvC0ARfnZBqnXwABFeSZHpKvJHJJfPz81GNueLj0oo=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0=
github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/shurcooL/httpfs v0.0.0-20190527155220-6a4d4a70508b h1:4kg1wyftSKxLtnPAvcRWakIPpokB9w780/KwrNLnfPA=
@@ -681,8 +681,8 @@ golang.org/x/crypto v0.0.0-20190927123631-a832865fa7ad/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073 h1:xMPOj6Pz6UipU1wXLkrtqpHbR0AVFnyPEQq/wRWz9lM=
golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200429183012-4b2356b1ed79 h1:IaQbIIB2X/Mp/DKctl6ROxz1KyMlKp4uyvL6+kQ7C88=
golang.org/x/crypto v0.0.0-20200429183012-4b2356b1ed79/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200604202706-70a84ac30bf9 h1:vEg9joUBmeBcK9iSJftGNf3coIG4HqZElCPehJsfAYM=
golang.org/x/crypto v0.0.0-20200604202706-70a84ac30bf9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
@@ -721,8 +721,8 @@ golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a h1:GuSPYbZzB5/dcLNCwLQLsg3obCJtX9IJhpXkvY7kzk0=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f h1:QBjCr1Fz5kw158VqdE9JfI9cJnl/ymnJWAdMuinqL7Y=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200602114024-627f9648deb9 h1:pNX+40auqi2JqRfOP1akLGtYcn15TUbkhwuCO3foqqM=
golang.org/x/net v0.0.0-20200602114024-627f9648deb9/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/oauth2 v0.0.0-20180620175406-ef147856a6dd/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181106182150-f42d05182288/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@@ -769,8 +769,8 @@ golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527 h1:uYVVQ9WP/Ds2ROhcaGPeIdVq0
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd h1:xhmwyvizuTgC2qz7ZlMluP20uW+C3Rm0FD/WLDX8884=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200509044756-6aff5f38e54f h1:mOhmO9WsBaJCNmaZHPtHs9wOcdqdKCjF6OPJlmDM3KI=
golang.org/x/sys v0.0.0-20200509044756-6aff5f38e54f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1 h1:ogLJMz+qpzav7lGMh10LMvAkM/fAoGlaiiHYiFYdm80=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=

View File

@@ -86,6 +86,11 @@ func TestAPIPullReview(t *testing.T) {
Body: "first old line",
OldLineNum: 1,
NewLineNum: 0,
}, {
Path: "iso-8859-1.txt",
Body: "this line contains a non-utf-8 character",
OldLineNum: 0,
NewLineNum: 1,
},
},
})
@@ -93,7 +98,7 @@ func TestAPIPullReview(t *testing.T) {
DecodeJSON(t, resp, &review)
assert.EqualValues(t, 6, review.ID)
assert.EqualValues(t, "PENDING", review.State)
assert.EqualValues(t, 2, review.CodeCommentsCount)
assert.EqualValues(t, 3, review.CodeCommentsCount)
// test SubmitPullReview
req = NewRequestWithJSON(t, http.MethodPost, fmt.Sprintf("/api/v1/repos/%s/%s/pulls/%d/reviews/%d?token=%s", repo.OwnerName, repo.Name, pullIssue.Index, review.ID, token), &api.SubmitPullReviewOptions{
@@ -104,7 +109,7 @@ func TestAPIPullReview(t *testing.T) {
DecodeJSON(t, resp, &review)
assert.EqualValues(t, 6, review.ID)
assert.EqualValues(t, "APPROVED", review.State)
assert.EqualValues(t, 2, review.CodeCommentsCount)
assert.EqualValues(t, 3, review.CodeCommentsCount)
// test DeletePullReview
req = NewRequestWithJSON(t, http.MethodPost, fmt.Sprintf("/api/v1/repos/%s/%s/pulls/%d/reviews?token=%s", repo.OwnerName, repo.Name, pullIssue.Index, token), &api.CreatePullReviewOptions{

View File

@@ -13,6 +13,7 @@ import (
"testing"
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/setting"
api "code.gitea.io/gitea/modules/structs"
"github.com/stretchr/testify/assert"
@@ -57,6 +58,12 @@ func TestAPISearchRepo(t *testing.T) {
user4 := models.AssertExistsAndLoadBean(t, &models.User{ID: 20}).(*models.User)
orgUser := models.AssertExistsAndLoadBean(t, &models.User{ID: 17}).(*models.User)
oldAPIDefaultNum := setting.API.DefaultPagingNum
defer func() {
setting.API.DefaultPagingNum = oldAPIDefaultNum
}()
setting.API.DefaultPagingNum = 10
// Map of expected results, where key is user for login
type expectedResults map[*models.User]struct {
count int
@@ -79,7 +86,7 @@ func TestAPISearchRepo(t *testing.T) {
user: {count: 10},
user2: {count: 10}},
},
{name: "RepositoriesDefaultMax10", requestURL: "/api/v1/repos/search?default&private=false", expectedResults: expectedResults{
{name: "RepositoriesDefault", requestURL: "/api/v1/repos/search?default&private=false", expectedResults: expectedResults{
nil: {count: 10},
user: {count: 10},
user2: {count: 10}},

View File

@@ -32,14 +32,14 @@ func TestDeleteBranch(t *testing.T) {
}
func TestUndoDeleteBranch(t *testing.T) {
defer prepareTestEnv(t)()
deleteBranch(t)
htmlDoc, name := branchAction(t, ".undo-button")
assert.Contains(t,
htmlDoc.doc.Find(".ui.positive.message").Text(),
i18n.Tr("en", "repo.branch.restore_success", name),
)
onGiteaRun(t, func(t *testing.T, u *url.URL) {
deleteBranch(t)
htmlDoc, name := branchAction(t, ".undo-button")
assert.Contains(t,
htmlDoc.doc.Find(".ui.positive.message").Text(),
i18n.Tr("en", "repo.branch.restore_success", name),
)
})
}
func deleteBranch(t *testing.T) {

View File

@@ -0,0 +1,2 @@
(binary file content, not representable as text)

View File

@@ -1 +1 @@
4a357436d925b5c974181ff12a994538ddc5a269
5f22f7d0d95d614d25a5b68592adb345a4b5c7fd

View File

@@ -10,6 +10,7 @@ import (
"testing"
"time"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/test"
"github.com/stretchr/testify/assert"
@@ -106,6 +107,12 @@ func TestCreateReleaseDraft(t *testing.T) {
func TestCreateReleasePaging(t *testing.T) {
defer prepareTestEnv(t)()
oldAPIDefaultNum := setting.API.DefaultPagingNum
defer func() {
setting.API.DefaultPagingNum = oldAPIDefaultNum
}()
setting.API.DefaultPagingNum = 10
session := loginUser(t, "user2")
// Create enaugh releases to have paging
for i := 0; i < 12; i++ {

View File

@@ -14,7 +14,7 @@ import (
api "code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil"
gouuid "github.com/satori/go.uuid"
gouuid "github.com/google/uuid"
"xorm.io/xorm"
)
@@ -97,7 +97,7 @@ func (a *Attachment) LinkedRepository() (*Repository, UnitType, error) {
// NewAttachment creates a new attachment object.
func NewAttachment(attach *Attachment, buf []byte, file io.Reader) (_ *Attachment, err error) {
attach.UUID = gouuid.NewV4().String()
attach.UUID = gouuid.New().String()
localPath := attach.LocalPath()
if err = os.MkdirAll(path.Dir(localPath), os.ModePerm); err != nil {
@@ -136,9 +136,8 @@ func GetAttachmentByID(id int64) (*Attachment, error) {
}
func getAttachmentByID(e Engine, id int64) (*Attachment, error) {
attach := &Attachment{ID: id}
if has, err := e.Get(attach); err != nil {
attach := &Attachment{}
if has, err := e.ID(id).Get(attach); err != nil {
return nil, err
} else if !has {
return nil, ErrAttachmentNotExist{ID: id, UUID: ""}
@@ -147,8 +146,8 @@ func getAttachmentByID(e Engine, id int64) (*Attachment, error) {
}
func getAttachmentByUUID(e Engine, uuid string) (*Attachment, error) {
attach := &Attachment{UUID: uuid}
has, err := e.Get(attach)
attach := &Attachment{}
has, err := e.Where("uuid=?", uuid).Get(attach)
if err != nil {
return nil, err
} else if !has {

View File

@@ -240,8 +240,8 @@ func getProtectedBranchBy(e Engine, repoID int64, branchName string) (*Protected
// GetProtectedBranchByID getting protected branch by ID
func GetProtectedBranchByID(id int64) (*ProtectedBranch, error) {
rel := &ProtectedBranch{ID: id}
has, err := x.Get(rel)
rel := &ProtectedBranch{}
has, err := x.ID(id).Get(rel)
if err != nil {
return nil, err
}
@@ -509,9 +509,9 @@ func (repo *Repository) GetDeletedBranches() ([]*DeletedBranch, error) {
}
// GetDeletedBranchByID get a deleted branch by its ID
func (repo *Repository) GetDeletedBranchByID(ID int64) (*DeletedBranch, error) {
deletedBranch := &DeletedBranch{ID: ID}
has, err := x.Get(deletedBranch)
func (repo *Repository) GetDeletedBranchByID(id int64) (*DeletedBranch, error) {
deletedBranch := &DeletedBranch{}
has, err := x.ID(id).Get(deletedBranch)
if err != nil {
return nil, err
}

View File

@@ -283,3 +283,15 @@ func DeleteOrphanedObjects(subject, refobject, joinCond string) error {
Delete("`" + subject + "`")
return err
}
// CountNullArchivedRepository counts the number of repositories with is_archived is null
func CountNullArchivedRepository() (int64, error) {
return x.Where(builder.IsNull{"is_archived"}).Count(new(Repository))
}
// FixNullArchivedRepository sets is_archived to false where it is null
func FixNullArchivedRepository() (int64, error) {
return x.Where(builder.IsNull{"is_archived"}).Cols("is_archived").Update(&Repository{
IsArchived: false,
})
}

View File

@@ -8,7 +8,10 @@ package models
import (
"fmt"
"regexp"
"strconv"
"strings"
"unicode/utf8"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
@@ -138,7 +141,8 @@ type Comment struct {
RenderedContent string `xorm:"-"`
// Path represents the 4 lines of code cemented by this comment
Patch string `xorm:"TEXT"`
Patch string `xorm:"-"`
PatchQuoted string `xorm:"TEXT patch"`
CreatedUnix timeutil.TimeStamp `xorm:"INDEX created"`
UpdatedUnix timeutil.TimeStamp `xorm:"INDEX updated"`
@@ -182,6 +186,33 @@ func (c *Comment) loadIssue(e Engine) (err error) {
return
}
// BeforeInsert will be invoked by XORM before inserting a record
func (c *Comment) BeforeInsert() {
c.PatchQuoted = c.Patch
if !utf8.ValidString(c.Patch) {
c.PatchQuoted = strconv.Quote(c.Patch)
}
}
// BeforeUpdate will be invoked by XORM before updating a record
func (c *Comment) BeforeUpdate() {
c.PatchQuoted = c.Patch
if !utf8.ValidString(c.Patch) {
c.PatchQuoted = strconv.Quote(c.Patch)
}
}
// AfterLoad is invoked from XORM after setting the values of all fields of this object.
func (c *Comment) AfterLoad(session *xorm.Session) {
c.Patch = c.PatchQuoted
if len(c.PatchQuoted) > 0 && c.PatchQuoted[0] == '"' {
unquoted, err := strconv.Unquote(c.PatchQuoted)
if err == nil {
c.Patch = unquoted
}
}
}
func (c *Comment) loadPoster(e Engine) (err error) {
if c.PosterID <= 0 || c.Poster != nil {
return nil
@@ -489,10 +520,12 @@ func (c *Comment) LoadReview() error {
return c.loadReview(x)
}
var notEnoughLines = regexp.MustCompile(`fatal: file .* has only \d+ lines?`)
func (c *Comment) checkInvalidation(doer *User, repo *git.Repository, branch string) error {
// FIXME differentiate between previous and proposed line
commit, err := repo.LineBlame(branch, repo.Path, c.TreePath, uint(c.UnsignedLine()))
if err != nil && strings.Contains(err.Error(), "fatal: no such path") {
if err != nil && (strings.Contains(err.Error(), "fatal: no such path") || notEnoughLines.MatchString(err.Error())) {
c.Invalidated = true
return UpdateComment(c, doer)
}

View File

@@ -295,10 +295,8 @@ func getLabelByID(e Engine, labelID int64) (*Label, error) {
return nil, ErrLabelNotExist{labelID}
}
l := &Label{
ID: labelID,
}
has, err := e.Get(l)
l := &Label{}
has, err := e.ID(labelID).Get(l)
if err != nil {
return nil, err
} else if !has {

View File

@@ -38,7 +38,10 @@ func (opts ListOptions) setEnginePagination(e Engine) Engine {
}
func (opts ListOptions) setDefaultValues() {
if opts.PageSize <= 0 || opts.PageSize > setting.API.MaxResponseItems {
if opts.PageSize <= 0 {
opts.PageSize = setting.API.DefaultPagingNum
}
if opts.PageSize > setting.API.MaxResponseItems {
opts.PageSize = setting.API.MaxResponseItems
}
if opts.Page <= 0 {

View File

@@ -300,7 +300,7 @@ func (source *LoginSource) SSPI() *SSPIConfig {
// CreateLoginSource inserts a LoginSource in the DB if not already
// existing with the given name.
func CreateLoginSource(source *LoginSource) error {
has, err := x.Get(&LoginSource{Name: source.Name})
has, err := x.Where("name=?", source.Name).Exist(new(LoginSource))
if err != nil {
return err
} else if has {

View File

@@ -16,7 +16,7 @@ import (
"code.gitea.io/gitea/modules/timeutil"
"github.com/dgrijalva/jwt-go"
uuid "github.com/satori/go.uuid"
uuid "github.com/google/uuid"
"github.com/unknwon/com"
"golang.org/x/crypto/bcrypt"
"xorm.io/xorm"
@@ -174,7 +174,7 @@ func CreateOAuth2Application(opts CreateOAuth2ApplicationOptions) (*OAuth2Applic
}
func createOAuth2Application(e Engine, opts CreateOAuth2ApplicationOptions) (*OAuth2Application, error) {
clientID := uuid.NewV4().String()
clientID := uuid.New().String()
app := &OAuth2Application{
UID: opts.UserID,
Name: opts.Name,

View File

@@ -35,6 +35,7 @@ import (
"code.gitea.io/gitea/modules/util"
"github.com/unknwon/com"
"xorm.io/builder"
)
var (
@@ -1754,22 +1755,28 @@ func GetRepositoriesMapByIDs(ids []int64) (map[int64]*Repository, error) {
}
// GetUserRepositories returns a list of repositories of given user.
func GetUserRepositories(opts *SearchRepoOptions) ([]*Repository, error) {
func GetUserRepositories(opts *SearchRepoOptions) ([]*Repository, int64, error) {
if len(opts.OrderBy) == 0 {
opts.OrderBy = "updated_unix DESC"
}
sess := x.
Where("owner_id = ?", opts.Actor.ID).
OrderBy(opts.OrderBy.String())
var cond = builder.NewCond()
cond = cond.And(builder.Eq{"owner_id": opts.Actor.ID})
if !opts.Private {
sess.And("is_private=?", false)
cond = cond.And(builder.Eq{"is_private": false})
}
sess = opts.setSessionPagination(sess)
sess := x.NewSession()
defer sess.Close()
count, err := sess.Where(cond).Count(new(Repository))
if err != nil {
return nil, 0, fmt.Errorf("Count: %v", err)
}
sess.Where(cond).OrderBy(opts.OrderBy.String())
repos := make([]*Repository, 0, opts.PageSize)
return repos, opts.setSessionPagination(sess).Find(&repos)
return repos, count, opts.setSessionPagination(sess).Find(&repos)
}
// GetUserMirrorRepositories returns a list of mirror repositories of given user.

View File

@@ -13,7 +13,7 @@ import (
"code.gitea.io/gitea/modules/generate"
"code.gitea.io/gitea/modules/timeutil"
gouuid "github.com/satori/go.uuid"
gouuid "github.com/google/uuid"
)
// AccessToken represents a personal access token.
@@ -45,7 +45,7 @@ func NewAccessToken(t *AccessToken) error {
return err
}
t.TokenSalt = salt
t.Token = base.EncodeSha1(gouuid.NewV4().String())
t.Token = base.EncodeSha1(gouuid.New().String())
t.TokenHash = hashToken(t.Token, t.TokenSalt)
t.TokenLastEight = t.Token[len(t.Token)-8:]
_, err = x.Insert(t)

View File

@@ -142,8 +142,8 @@ func UpdateTwoFactor(t *TwoFactor) error {
// GetTwoFactorByUID returns the two-factor authentication token associated with
// the user, if any.
func GetTwoFactorByUID(uid int64) (*TwoFactor, error) {
twofa := &TwoFactor{UID: uid}
has, err := x.Get(twofa)
twofa := &TwoFactor{}
has, err := x.Where("uid=?", uid).Get(twofa)
if err != nil {
return nil, err
} else if !has {

View File

@@ -14,7 +14,7 @@ import (
"code.gitea.io/gitea/modules/setting"
gouuid "github.com/satori/go.uuid"
gouuid "github.com/google/uuid"
"github.com/unknwon/com"
)
@@ -46,7 +46,7 @@ func (upload *Upload) LocalPath() string {
// NewUpload creates a new upload object.
func NewUpload(name string, buf []byte, file multipart.File) (_ *Upload, err error) {
upload := &Upload{
UUID: gouuid.NewV4().String(),
UUID: gouuid.New().String(),
Name: name,
}
@@ -76,8 +76,8 @@ func NewUpload(name string, buf []byte, file multipart.File) (_ *Upload, err err
// GetUploadByUUID returns the Upload by UUID
func GetUploadByUUID(uuid string) (*Upload, error) {
upload := &Upload{UUID: uuid}
has, err := x.Get(upload)
upload := &Upload{}
has, err := x.Where("uuid=?", uuid).Get(upload)
if err != nil {
return nil, err
} else if !has {

View File

@@ -645,7 +645,7 @@ func (u *User) GetOrganizationCount() (int64, error) {
// GetRepositories returns repositories that user owns, including private repositories.
func (u *User) GetRepositories(listOpts ListOptions) (err error) {
u.Repos, err = GetUserRepositories(&SearchRepoOptions{Actor: u, Private: true, ListOptions: listOpts})
u.Repos, _, err = GetUserRepositories(&SearchRepoOptions{Actor: u, Private: true, ListOptions: listOpts})
return err
}
@@ -1558,8 +1558,8 @@ func GetUserByEmailContext(ctx DBContext, email string) (*User, error) {
// Finally, if email address is the protected email address:
if strings.HasSuffix(email, fmt.Sprintf("@%s", setting.Service.NoReplyAddress)) {
username := strings.TrimSuffix(email, fmt.Sprintf("@%s", setting.Service.NoReplyAddress))
user := &User{LowerName: username}
has, err := ctx.e.Get(user)
user := &User{}
has, err := ctx.e.Where("lower_name=?", username).Get(user)
if err != nil {
return nil, err
}

View File

@@ -71,8 +71,8 @@ func GetEmailAddresses(uid int64) ([]*EmailAddress, error) {
// GetEmailAddressByID gets a user's email address by ID
func GetEmailAddressByID(uid, id int64) (*EmailAddress, error) {
// User ID is required for security reasons
email := &EmailAddress{ID: id, UID: uid}
if has, err := x.Get(email); err != nil {
email := &EmailAddress{UID: uid}
if has, err := x.ID(id).Get(email); err != nil {
return nil, err
} else if !has {
return nil, nil
@@ -126,7 +126,7 @@ func isEmailUsed(e Engine, email string) (bool, error) {
return true, nil
}
return e.Get(&EmailAddress{Email: email})
return e.Where("email=?", email).Get(&EmailAddress{})
}
// IsEmailUsed returns true if the email has been used.
@@ -251,8 +251,8 @@ func MakeEmailPrimary(email *EmailAddress) error {
return ErrEmailNotActivated
}
user := &User{ID: email.UID}
has, err = x.Get(user)
user := &User{}
has, err = x.ID(email.UID).Get(user)
if err != nil {
return err
} else if !has {

View File

@@ -111,8 +111,8 @@ func GetUserByOpenID(uri string) (*User, error) {
log.Trace("Normalized OpenID URI: " + uri)
// Otherwise, check in openid table
oid := &UserOpenID{URI: uri}
has, err := x.Get(oid)
oid := &UserOpenID{}
has, err := x.Where("uri=?", uri).Get(oid)
if err != nil {
return nil, err
}

View File

@@ -15,7 +15,7 @@ import (
api "code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil"
gouuid "github.com/satori/go.uuid"
gouuid "github.com/google/uuid"
)
// HookContentType is the content type of a web hook
@@ -769,7 +769,7 @@ func createHookTask(e Engine, t *HookTask) error {
if err != nil {
return err
}
t.UUID = gouuid.NewV4().String()
t.UUID = gouuid.New().String()
t.PayloadContent = string(data)
_, err = e.Insert(t)
return err

View File

@@ -10,6 +10,7 @@ import (
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
uuid "github.com/google/uuid"
"github.com/lafriks/xormstore"
"github.com/markbates/goth"
"github.com/markbates/goth/gothic"
@@ -25,7 +26,6 @@ import (
"github.com/markbates/goth/providers/openidConnect"
"github.com/markbates/goth/providers/twitter"
"github.com/markbates/goth/providers/yandex"
uuid "github.com/satori/go.uuid"
"xorm.io/xorm"
)
@@ -61,7 +61,7 @@ func Init(x *xorm.Engine) error {
gothic.Store = store
gothic.SetState = func(req *http.Request) string {
return uuid.NewV4().String()
return uuid.New().String()
}
gothic.GetProviderName = func(req *http.Request) (string, error) {

View File

@@ -14,7 +14,7 @@ import (
"gitea.com/macaron/macaron"
"gitea.com/macaron/session"
gouuid "github.com/satori/go.uuid"
gouuid "github.com/google/uuid"
)
// Ensure the struct implements the interface.
@@ -92,7 +92,7 @@ func (r *ReverseProxy) newUser(ctx *macaron.Context) *models.User {
return nil
}
email := gouuid.NewV4().String() + "@localhost"
email := gouuid.New().String() + "@localhost"
if setting.Service.EnableReverseProxyEmail {
webAuthEmail := ctx.Req.Header.Get(setting.ReverseProxyAuthEmail)
if len(webAuthEmail) > 0 {

View File

@@ -17,8 +17,8 @@ import (
"gitea.com/macaron/macaron"
"gitea.com/macaron/session"
gouuid "github.com/google/uuid"
"github.com/quasoft/websspi"
gouuid "github.com/satori/go.uuid"
)
const (
@@ -157,12 +157,12 @@ func (s *SSPI) shouldAuthenticate(ctx *macaron.Context) (shouldAuth bool) {
// newUser creates a new user object for the purpose of automatic registration
// and populates its name and email with the information present in request headers.
func (s *SSPI) newUser(ctx *macaron.Context, username string, cfg *models.SSPIConfig) (*models.User, error) {
email := gouuid.NewV4().String() + "@localhost.localdomain"
email := gouuid.New().String() + "@localhost.localdomain"
user := &models.User{
Name: username,
Email: email,
KeepEmailPrivate: true,
Passwd: gouuid.NewV4().String(),
Passwd: gouuid.New().String(),
IsActive: cfg.AutoActivateUsers,
Language: cfg.DefaultLanguage,
UseCustomAvatar: true,

View File

@@ -11,7 +11,7 @@ import (
// ToCorrectPageSize makes sure page size is in allowed range.
func ToCorrectPageSize(size int) int {
if size <= 0 {
size = 10
size = setting.API.DefaultPagingNum
} else if size > setting.API.MaxResponseItems {
size = setting.API.MaxResponseItems
}

View File

@@ -46,7 +46,7 @@ func (repo *Repository) GetBranchCommitID(name string) (string, error) {
// GetTagCommitID returns last commit ID string of given tag.
func (repo *Repository) GetTagCommitID(name string) (string, error) {
stdout, err := NewCommand("rev-list", "-n", "1", name).RunInDir(repo.Path)
stdout, err := NewCommand("rev-list", "-n", "1", TagPrefix+name).RunInDir(repo.Path)
if err != nil {
if strings.Contains(err.Error(), "unknown revision or path") {
return "", ErrNotExist{name, ""}

View File

@@ -29,7 +29,7 @@ import (
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil"
gouuid "github.com/satori/go.uuid"
gouuid "github.com/google/uuid"
)
var (
@@ -260,7 +260,7 @@ func (g *GiteaLocalUploader) CreateReleases(releases ...*base.Release) error {
for _, asset := range release.Assets {
var attach = models.Attachment{
UUID: gouuid.NewV4().String(),
UUID: gouuid.New().String(),
Name: asset.Name,
DownloadCount: int64(*asset.DownloadCount),
Size: int64(*asset.Size),

View File

@@ -9,7 +9,6 @@ import (
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
)
// GetBranch returns a branch by its name
@@ -74,39 +73,9 @@ func CreateNewBranch(doer *models.User, repo *models.Repository, oldBranchName,
return fmt.Errorf("OldBranch: %s does not exist. Cannot create new branch from this", oldBranchName)
}
basePath, err := models.CreateTemporaryPath("branch-maker")
if err != nil {
return err
}
defer func() {
if err := models.RemoveTemporaryPath(basePath); err != nil {
log.Error("CreateNewBranch: RemoveTemporaryPath: %s", err)
}
}()
if err := git.Clone(repo.RepoPath(), basePath, git.CloneRepoOptions{
Bare: true,
Shared: true,
}); err != nil {
log.Error("Failed to clone repository: %s (%v)", repo.FullName(), err)
return fmt.Errorf("Failed to clone repository: %s (%v)", repo.FullName(), err)
}
gitRepo, err := git.OpenRepository(basePath)
if err != nil {
log.Error("Unable to open temporary repository: %s (%v)", basePath, err)
return fmt.Errorf("Failed to open new temporary repository in: %s %v", basePath, err)
}
defer gitRepo.Close()
if err = gitRepo.CreateBranch(branchName, oldBranchName); err != nil {
log.Error("Unable to create branch: %s from %s. (%v)", branchName, oldBranchName, err)
return fmt.Errorf("Unable to create branch: %s from %s. (%v)", branchName, oldBranchName, err)
}
if err = git.Push(basePath, git.PushOptions{
Remote: "origin",
Branch: branchName,
if err := git.Push(repo.RepoPath(), git.PushOptions{
Remote: repo.RepoPath(),
Branch: fmt.Sprintf("%s:%s%s", oldBranchName, git.BranchPrefix, branchName),
Env: models.PushingEnvironment(doer, repo),
}); err != nil {
if git.IsErrPushOutOfDate(err) || git.IsErrPushRejected(err) {
@@ -124,39 +93,10 @@ func CreateNewBranchFromCommit(doer *models.User, repo *models.Repository, commi
if err := checkBranchName(repo, branchName); err != nil {
return err
}
basePath, err := models.CreateTemporaryPath("branch-maker")
if err != nil {
return err
}
defer func() {
if err := models.RemoveTemporaryPath(basePath); err != nil {
log.Error("CreateNewBranchFromCommit: RemoveTemporaryPath: %s", err)
}
}()
if err := git.Clone(repo.RepoPath(), basePath, git.CloneRepoOptions{
Bare: true,
Shared: true,
}); err != nil {
log.Error("Failed to clone repository: %s (%v)", repo.FullName(), err)
return fmt.Errorf("Failed to clone repository: %s (%v)", repo.FullName(), err)
}
gitRepo, err := git.OpenRepository(basePath)
if err != nil {
log.Error("Unable to open temporary repository: %s (%v)", basePath, err)
return fmt.Errorf("Failed to open new temporary repository in: %s %v", basePath, err)
}
defer gitRepo.Close()
if err = gitRepo.CreateBranch(branchName, commit); err != nil {
log.Error("Unable to create branch: %s from %s. (%v)", branchName, commit, err)
return fmt.Errorf("Unable to create branch: %s from %s. (%v)", branchName, commit, err)
}
if err = git.Push(basePath, git.PushOptions{
Remote: "origin",
Branch: branchName,
if err := git.Push(repo.RepoPath(), git.PushOptions{
Remote: repo.RepoPath(),
Branch: fmt.Sprintf("%s:%s%s", commit, git.BranchPrefix, branchName),
Env: models.PushingEnvironment(doer, repo),
}); err != nil {
if git.IsErrPushOutOfDate(err) || git.IsErrPushRejected(err) {

View File

@@ -16,6 +16,7 @@ import (
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
"github.com/mcuadros/go-version"
"github.com/unknwon/com"
@@ -147,7 +148,7 @@ func initRepoCommit(tmpPath string, repo *models.Repository, u *models.User, def
}
if len(defaultBranch) == 0 {
defaultBranch = "master"
defaultBranch = setting.Repository.DefaultBranch
}
if stdout, err := git.NewCommand("push", "origin", "master:"+defaultBranch).

View File

@@ -105,7 +105,7 @@ func DBConnStr() (string, error) {
switch Database.Type {
case "mysql":
connType := "tcp"
if Database.Host[0] == '/' { // looks like a unix socket
if len(Database.Host) > 0 && Database.Host[0] == '/' { // looks like a unix socket
connType = "unix"
}
tls := Database.SSLMode

View File

@@ -40,6 +40,7 @@ var (
DisabledRepoUnits []string
DefaultRepoUnits []string
PrefixArchiveFiles bool
DefaultBranch string
// Repository editor settings
Editor struct {
@@ -201,6 +202,7 @@ func newRepository() {
Repository.DisableHTTPGit = sec.Key("DISABLE_HTTP_GIT").MustBool()
Repository.UseCompatSSHURI = sec.Key("USE_COMPAT_SSH_URI").MustBool()
Repository.MaxCreationLimit = sec.Key("MAX_CREATION_LIMIT").MustInt(-1)
Repository.DefaultBranch = sec.Key("DEFAULT_BRANCH").MustString("master")
RepoRootPath = sec.Key("ROOT").MustString(path.Join(homeDir, "gitea-repositories"))
forcePathSeparator(RepoRootPath)
if !filepath.IsAbs(RepoRootPath) {

View File

@@ -275,12 +275,6 @@ func TopicSearch(ctx *context.APIContext) {
kw := ctx.Query("q")
listOptions := utils.GetListOptions(ctx)
if listOptions.Page < 1 {
listOptions.Page = 1
}
if listOptions.PageSize < 1 {
listOptions.PageSize = 10
}
topics, err := models.FindTopics(&models.FindTopicOptions{
Keyword: kw,

View File

@@ -6,6 +6,7 @@ package user
import (
"net/http"
"strconv"
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/context"
@@ -15,10 +16,12 @@ import (
// listUserRepos - List the repositories owned by the given user.
func listUserRepos(ctx *context.APIContext, u *models.User, private bool) {
repos, err := models.GetUserRepositories(&models.SearchRepoOptions{
opts := utils.GetListOptions(ctx)
repos, count, err := models.GetUserRepositories(&models.SearchRepoOptions{
Actor: u,
Private: private,
ListOptions: utils.GetListOptions(ctx),
ListOptions: opts,
})
if err != nil {
ctx.Error(http.StatusInternalServerError, "GetUserRepositories", err)
@@ -36,6 +39,9 @@ func listUserRepos(ctx *context.APIContext, u *models.User, private bool) {
apiRepos = append(apiRepos, repos[i].APIFormat(access))
}
}
ctx.SetLinkHeader(int(count), opts.PageSize)
ctx.Header().Set("X-Total-Count", strconv.FormatInt(count, 10))
ctx.JSON(http.StatusOK, &apiRepos)
}
@@ -92,31 +98,37 @@ func ListMyRepos(ctx *context.APIContext) {
// "200":
// "$ref": "#/responses/RepositoryList"
ownRepos, err := models.GetUserRepositories(&models.SearchRepoOptions{
Actor: ctx.User,
Private: true,
ListOptions: utils.GetListOptions(ctx),
})
if err != nil {
ctx.Error(http.StatusInternalServerError, "GetUserRepositories", err)
return
opts := &models.SearchRepoOptions{
ListOptions: utils.GetListOptions(ctx),
Actor: ctx.User,
OwnerID: ctx.User.ID,
Private: ctx.IsSigned,
IncludeDescription: true,
}
accessibleReposMap, err := ctx.User.GetRepositoryAccesses()
var err error
repos, count, err := models.SearchRepository(opts)
if err != nil {
ctx.Error(http.StatusInternalServerError, "GetRepositoryAccesses", err)
ctx.Error(http.StatusInternalServerError, "SearchRepository", err)
return
}
apiRepos := make([]*api.Repository, len(ownRepos)+len(accessibleReposMap))
for i := range ownRepos {
apiRepos[i] = ownRepos[i].APIFormat(models.AccessModeOwner)
results := make([]*api.Repository, len(repos))
for i, repo := range repos {
if err = repo.GetOwner(); err != nil {
ctx.Error(http.StatusInternalServerError, "GetOwner", err)
return
}
accessMode, err := models.AccessLevel(ctx.User, repo)
if err != nil {
ctx.Error(http.StatusInternalServerError, "AccessLevel", err)
}
results[i] = repo.APIFormat(accessMode)
}
i := len(ownRepos)
for repo, access := range accessibleReposMap {
apiRepos[i] = repo.APIFormat(access)
i++
}
ctx.JSON(http.StatusOK, &apiRepos)
ctx.SetLinkHeader(int(count), opts.ListOptions.PageSize)
ctx.Header().Set("X-Total-Count", strconv.FormatInt(count, 10))
ctx.JSON(http.StatusOK, &results)
}
// ListOrgRepos - list the repositories of an organization.

View File

@@ -6,6 +6,7 @@
package repo
import (
"fmt"
"strings"
"code.gitea.io/gitea/models"
@@ -102,7 +103,11 @@ func RestoreBranchPost(ctx *context.Context) {
return
}
if err := ctx.Repo.GitRepo.CreateBranch(deletedBranch.Name, deletedBranch.Commit); err != nil {
if err := git.Push(ctx.Repo.Repository.RepoPath(), git.PushOptions{
Remote: ctx.Repo.Repository.RepoPath(),
Branch: fmt.Sprintf("%s:%s%s", deletedBranch.Commit, git.BranchPrefix, deletedBranch.Name),
Env: models.PushingEnvironment(ctx.User, ctx.Repo.Repository),
}); err != nil {
if strings.Contains(err.Error(), "already exists") {
ctx.Flash.Error(ctx.Tr("repo.branch.already_exists", deletedBranch.Name))
return
@@ -112,12 +117,6 @@ func RestoreBranchPost(ctx *context.Context) {
return
}
if err := ctx.Repo.Repository.RemoveDeletedBranch(deletedBranch.ID); err != nil {
log.Error("RemoveDeletedBranch: %v", err)
ctx.Flash.Error(ctx.Tr("repo.branch.restore_failed", deletedBranch.Name))
return
}
// Don't return error below this
if err := repofiles.PushUpdate(
ctx.Repo.Repository,
@@ -216,7 +215,7 @@ func loadBranches(ctx *context.Context) []*Branch {
}
}
divergence, divergenceError := repofiles.CountDivergingCommits(ctx.Repo.Repository, branchName)
divergence, divergenceError := repofiles.CountDivergingCommits(ctx.Repo.Repository, git.BranchPrefix+branchName)
if divergenceError != nil {
ctx.ServerError("CountDivergingCommits", divergenceError)
return nil
@@ -331,6 +330,8 @@ func CreateBranch(ctx *context.Context, form auth.NewBranchForm) {
var err error
if ctx.Repo.IsViewBranch {
err = repo_module.CreateNewBranch(ctx.User, ctx.Repo.Repository, ctx.Repo.BranchName, form.NewBranchName)
} else if ctx.Repo.IsViewTag {
err = repo_module.CreateNewBranchFromCommit(ctx.User, ctx.Repo.Repository, ctx.Repo.CommitID, form.NewBranchName)
} else {
err = repo_module.CreateNewBranchFromCommit(ctx.User, ctx.Repo.Repository, ctx.Repo.BranchName, form.NewBranchName)
}

View File

@@ -381,7 +381,20 @@ func ParseCompareInfo(ctx *context.Context) (*models.User, *models.Repository, *
return nil, nil, nil, nil, "", ""
}
compareInfo, err := headGitRepo.GetCompareInfo(baseRepo.RepoPath(), baseBranch, headBranch)
baseBranchRef := baseBranch
if baseIsBranch {
baseBranchRef = git.BranchPrefix + baseBranch
} else if baseIsTag {
baseBranchRef = git.TagPrefix + baseBranch
}
headBranchRef := headBranch
if headIsBranch {
headBranchRef = git.BranchPrefix + headBranch
} else if headIsTag {
headBranchRef = git.TagPrefix + headBranch
}
compareInfo, err := headGitRepo.GetCompareInfo(baseRepo.RepoPath(), baseBranchRef, headBranchRef)
if err != nil {
ctx.ServerError("GetCompareInfo", err)
return nil, nil, nil, nil, "", ""

View File

@@ -428,6 +428,20 @@ func PrepareViewPullInfo(ctx *context.Context, issue *models.Issue) *git.Compare
sha, err := baseGitRepo.GetRefCommitID(pull.GetGitRefName())
if err != nil {
if git.IsErrNotExist(err) {
ctx.Data["IsPullRequestBroken"] = true
if pull.IsSameRepo() {
ctx.Data["HeadTarget"] = pull.HeadBranch
} else if pull.HeadRepo == nil {
ctx.Data["HeadTarget"] = "<deleted>:" + pull.HeadBranch
} else {
ctx.Data["HeadTarget"] = pull.HeadRepo.OwnerName + ":" + pull.HeadBranch
}
ctx.Data["BaseTarget"] = pull.BaseBranch
ctx.Data["NumCommits"] = 0
ctx.Data["NumFiles"] = 0
return nil
}
ctx.ServerError(fmt.Sprintf("GetRefCommitID(%s)", pull.GetGitRefName()), err)
return nil
}
@@ -462,17 +476,15 @@ func PrepareViewPullInfo(ctx *context.Context, issue *models.Issue) *git.Compare
ctx.Data["IsPullRequestBroken"] = true
if pull.IsSameRepo() {
ctx.Data["HeadTarget"] = pull.HeadBranch
} else if pull.HeadRepo == nil {
ctx.Data["HeadTarget"] = "<deleted>:" + pull.HeadBranch
} else {
if pull.HeadRepo == nil {
ctx.Data["HeadTarget"] = "<deleted>:" + pull.HeadBranch
} else {
ctx.Data["HeadTarget"] = pull.HeadRepo.OwnerName + ":" + pull.HeadBranch
}
ctx.Data["HeadTarget"] = pull.HeadRepo.OwnerName + ":" + pull.HeadBranch
}
}
compareInfo, err := baseGitRepo.GetCompareInfo(pull.BaseRepo.RepoPath(),
pull.BaseBranch, pull.GetGitRefName())
git.BranchPrefix+pull.BaseBranch, pull.GetGitRefName())
if err != nil {
if strings.Contains(err.Error(), "fatal: Not a valid object name") {
ctx.Data["IsPullRequestBroken"] = true
@@ -950,6 +962,16 @@ func CompareAndPullRequestPost(ctx *context.Context, form auth.CreateIssueForm)
if models.IsErrUserDoesNotHaveAccessToRepo(err) {
ctx.Error(400, "UserDoesNotHaveAccessToRepo", err.Error())
return
} else if git.IsErrPushRejected(err) {
pushrejErr := err.(*git.ErrPushRejected)
message := pushrejErr.Message
if len(message) == 0 {
ctx.Flash.Error(ctx.Tr("repo.pulls.push_rejected_no_message"))
} else {
ctx.Flash.Error(ctx.Tr("repo.pulls.push_rejected", utils.SanitizeFlashErrorString(pushrejErr.Message)))
}
ctx.Redirect(ctx.Repo.RepoLink + "/pulls/" + com.ToStr(pullIssue.Index))
return
}
ctx.ServerError("NewPullRequest", err)
return

View File

@@ -69,13 +69,6 @@ func Releases(ctx *context.Context) {
IncludeTags: true,
}
if opts.ListOptions.Page <= 1 {
opts.ListOptions.Page = 1
}
if opts.ListOptions.PageSize <= 0 {
opts.ListOptions.Page = 10
}
releases, err := models.GetReleasesByRepoID(ctx.Repo.Repository.ID, opts)
if err != nil {
ctx.ServerError("GetReleasesByRepoID", err)

View File

@@ -134,6 +134,7 @@ func Create(ctx *context.Context) {
ctx.Data["readme"] = "Default"
ctx.Data["private"] = getRepoPrivate(ctx)
ctx.Data["IsForcedPrivate"] = setting.Repository.ForcePrivate
ctx.Data["default_branch"] = setting.Repository.DefaultBranch
ctxUser := checkContextUser(ctx, ctx.QueryInt64("org"))
if ctx.Written() {

View File

@@ -128,7 +128,16 @@ func ChangeTargetBranch(pr *models.PullRequest, doer *models.User, targetBranch
if pr.Status == models.PullRequestStatusChecking {
pr.Status = models.PullRequestStatusMergeable
}
if err := pr.UpdateColsIfNotMerged("merge_base", "status", "conflicted_files", "base_branch"); err != nil {
// Update Commit Divergence
divergence, err := GetDiverging(pr)
if err != nil {
return err
}
pr.CommitsAhead = divergence.Ahead
pr.CommitsBehind = divergence.Behind
if err := pr.UpdateColsIfNotMerged("merge_base", "status", "conflicted_files", "base_branch", "commits_ahead", "commits_behind"); err != nil {
return err
}
@@ -399,6 +408,16 @@ func PushToBaseRepo(pr *models.PullRequest) (err error) {
// Use InternalPushingEnvironment here because we know that pre-receive and post-receive do not run on a refs/pulls/...
Env: models.InternalPushingEnvironment(pr.Issue.Poster, pr.BaseRepo),
}); err != nil {
if git.IsErrPushOutOfDate(err) {
// This should not happen as we're using force!
log.Error("Unable to push PR head for %s#%d (%-v:%s) due to ErrPushOfDate: %v", pr.BaseRepo.FullName(), pr.Index, pr.BaseRepo, headFile, err)
return err
} else if git.IsErrPushRejected(err) {
rejectErr := err.(*git.ErrPushRejected)
log.Info("Unable to push PR head for %s#%d (%-v:%s) due to rejection:\nStdout: %s\nStderr: %s\nError: %v", pr.BaseRepo.FullName(), pr.Index, pr.BaseRepo, headFile, rejectErr.StdOut, rejectErr.StdErr, rejectErr.Err)
return err
}
log.Error("Unable to push PR head for %s#%d (%-v:%s) due to Error: %v", pr.BaseRepo.FullName(), pr.Index, pr.BaseRepo, headFile, err)
return fmt.Errorf("Push: %s:%s %s:%s %v", pr.HeadRepo.FullName(), pr.HeadBranch, pr.BaseRepo.FullName(), headFile, err)
}

View File

@@ -100,7 +100,7 @@ func createTemporaryRepo(pr *models.PullRequest) (string, error) {
outbuf.Reset()
errbuf.Reset()
if err := git.NewCommand("fetch", "origin", "--no-tags", pr.BaseBranch+":"+baseBranch, pr.BaseBranch+":original_"+baseBranch).RunInDirPipeline(tmpBasePath, &outbuf, &errbuf); err != nil {
if err := git.NewCommand("fetch", "origin", "--no-tags", "--", pr.BaseBranch+":"+baseBranch, pr.BaseBranch+":original_"+baseBranch).RunInDirPipeline(tmpBasePath, &outbuf, &errbuf); err != nil {
log.Error("Unable to fetch origin base branch [%s:%s -> base, original_base in %s]: %v:\n%s\n%s", pr.BaseRepo.FullName(), pr.BaseBranch, tmpBasePath, err, outbuf.String(), errbuf.String())
if err := models.RemoveTemporaryPath(tmpBasePath); err != nil {
log.Error("CreateTempRepo: RemoveTemporaryPath: %s", err)
@@ -140,7 +140,7 @@ func createTemporaryRepo(pr *models.PullRequest) (string, error) {
trackingBranch := "tracking"
// Fetch head branch
if err := git.NewCommand("fetch", "--no-tags", remoteRepoName, pr.HeadBranch+":"+trackingBranch).RunInDirPipeline(tmpBasePath, &outbuf, &errbuf); err != nil {
if err := git.NewCommand("fetch", "--no-tags", remoteRepoName, git.BranchPrefix+pr.HeadBranch+":"+trackingBranch).RunInDirPipeline(tmpBasePath, &outbuf, &errbuf); err != nil {
log.Error("Unable to fetch head_repo head branch [%s:%s -> tracking in %s]: %v:\n%s\n%s", pr.HeadRepo.FullName(), pr.HeadBranch, tmpBasePath, err, outbuf.String(), errbuf.String())
if err := models.RemoveTemporaryPath(tmpBasePath); err != nil {
log.Error("CreateTempRepo: RemoveTemporaryPath: %s", err)

View File

@@ -165,7 +165,7 @@
</div>
<div class="inline field">
<label for="default_branch">{{.i18n.Tr "repo.default_branch"}}</label>
<input id="default_branch" name="default_branch" value="{{.default_branch}}" placeholder="master">
<input id="default_branch" name="default_branch" value="{{.default_branch}}" placeholder="{{.default_branch}}">
</div>
</div>

View File

@@ -49,10 +49,11 @@
<div class="markdown">
<pre><code>touch README.md
git init
{{if ne .Repository.DefaultBranch "master"}}git branch -m master {{.Repository.DefaultBranch}}{{end}}
git add README.md
git commit -m "first commit"
git remote add origin <span class="clone-url">{{if $.DisableSSH}}{{$.CloneLink.HTTPS}}{{else}}{{$.CloneLink.SSH}}{{end}}</span>
git push -u origin {{if ne .Repository.DefaultBranch "master"}}master:{{.Repository.DefaultBranch}}{{else}}master{{end}}</code></pre>
git push -u origin {{.Repository.DefaultBranch}}</code></pre>
</div>
</div>
<div class="ui divider"></div>

View File

@@ -1,13 +1,11 @@
<form class="ui form ignore-dirty">
<div class="ui fluid action input">
<div class="ui search fluid action input">
<input type="hidden" name="type" value="{{$.ViewType}}"/>
<input type="hidden" name="state" value="{{$.State}}"/>
<input type="hidden" name="labels" value="{{.SelectLabels}}"/>
<input type="hidden" name="milestone" value="{{$.MilestoneID}}"/>
<input type="hidden" name="assignee" value="{{$.AssigneeID}}"/>
<div class="ui search action input">
<input name="q" value="{{.Keyword}}" placeholder="{{.i18n.Tr "explore.search"}}..." autofocus>
<button class="ui blue button" type="submit">{{.i18n.Tr "explore.search"}}</button>
</div>
<input name="q" value="{{.Keyword}}" placeholder="{{.i18n.Tr "explore.search"}}..." autofocus>
<button class="ui blue button" type="submit">{{.i18n.Tr "explore.search"}}</button>
</div>
</form>

View File

@@ -53,7 +53,7 @@
<tbody>
{{if .HasParentPath}}
<tr class="has-parent">
<td colspan="3">{{svg "octicon-mail-reply" 16}}<a href="{{EscapePound .BranchLink}}{{.ParentPath}}">..</a></td>
<td colspan="3">{{svg "octicon-reply" 16}}<a href="{{EscapePound .BranchLink}}{{.ParentPath}}">..</a></td>
</tr>
{{end}}
{{range $item := .Files}}

View File

@@ -67,15 +67,13 @@
</div>
<div class="column center aligned">
<form class="ui form ignore-dirty">
<div class="ui fluid action input">
<div class="ui search fluid action input">
<input type="hidden" name="type" value="{{$.ViewType}}"/>
<input type="hidden" name="repos" value="[{{range $.RepoIDs}}{{.}}%2C{{end}}]"/>
<input type="hidden" name="sort" value="{{$.SortType}}"/>
<input type="hidden" name="state" value="{{$.State}}"/>
<div class="ui search action input">
<input name="q" value="{{$.Keyword}}" placeholder="{{.i18n.Tr "explore.search"}}..." autofocus>
<button class="ui blue button" type="submit">{{.i18n.Tr "explore.search"}}</button>
</div>
<input name="q" value="{{$.Keyword}}" placeholder="{{.i18n.Tr "explore.search"}}..." autofocus>
<button class="ui blue button" type="submit">{{.i18n.Tr "explore.search"}}</button>
</div>
</form>
</div>

View File

@@ -101,7 +101,7 @@ is supported by go-git.
| http(s):// (smart) | ✔ |
| git:// | ✔ |
| ssh:// | ✔ |
| file:// | |
| file:// | partial | Warning: this is not pure Golang. This shells out to the `git` binary. |
| custom | ✔ |
| **other features** |
| gitignore | ✔ |

View File

@@ -1,9 +1,9 @@
![go-git logo](https://cdn.rawgit.com/src-d/artwork/02036484/go-git/files/go-git-github-readme-header.png)
[![GoDoc](https://godoc.org/github.com/go-git/go-git/v5?status.svg)](https://godoc.org/github.com/src-d/go-git) [![Build Status](https://github.com/go-git/go-git/workflows/Test%20&%20Coverage/badge.svg)](https://github.com/go-git/go-git/actions) [![Go Report Card](https://goreportcard.com/badge/github.com/src-d/go-git)](https://goreportcard.com/report/github.com/src-d/go-git)
[![GoDoc](https://godoc.org/github.com/go-git/go-git/v5?status.svg)](https://pkg.go.dev/github.com/go-git/go-git/v5) [![Build Status](https://github.com/go-git/go-git/workflows/Test/badge.svg)](https://github.com/go-git/go-git/actions) [![Go Report Card](https://goreportcard.com/badge/github.com/go-git/go-git)](https://goreportcard.com/report/github.com/go-git/go-git)
*go-git* is a highly extensible git implementation library written in **pure Go**.
It can be used to manipulate git repositories at low level *(plumbing)* or high level *(porcelain)*, through an idiomatic Go API. It also supports several types of storage, such as in-memory filesystems, or custom implementations, thanks to the [`Storer`](https://godoc.org/github.com/go-git/go-git/v5/plumbing/storer) interface.
It can be used to manipulate git repositories at low level *(plumbing)* or high level *(porcelain)*, through an idiomatic Go API. It also supports several types of storage, such as in-memory filesystems, or custom implementations, thanks to the [`Storer`](https://pkg.go.dev/github.com/go-git/go-git/v5/plumbing/storer) interface.
It's being actively developed since 2015 and is being used extensively by [Keybase](https://keybase.io/blog/encrypted-git-for-everyone), [Gitea](https://gitea.io/en-us/) or [Pulumi](https://github.com/search?q=org%3Apulumi+go-git&type=Code), and by many other libraries and tools.
@@ -12,7 +12,7 @@ Project Status
After the legal issues with the [`src-d`](https://github.com/src-d) organization, the lack of update for four months and the requirement to make a hard fork, the project is **now back to normality**.
The project is currently actively maintained by individual contributors, including several of the original authors, but also backed by a new company `gitsigth` where `go-git` is a critical component used at scale.
The project is currently actively maintained by individual contributors, including several of the original authors, but also backed by a new company, [gitsight](https://github.com/gitsight), where `go-git` is a critical component used at scale.
Comparison with git
@@ -37,7 +37,7 @@ import "github.com/go-git/go-git" // with go modules disabled
Examples
--------
> Please note that the `CheckIfError` and `Info` functions used in the examples are from the [examples package](https://github.com/src-d/go-git/blob/master/_examples/common.go#L17) just to be used in the examples.
> Please note that the `CheckIfError` and `Info` functions used in the examples come from the [examples package](https://github.com/go-git/go-git/blob/master/_examples/common.go#L19) and exist only to keep the examples short.
### Basic example
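The example body itself is not part of this diff; as a reminder of the kind of usage the section introduces, here is a minimal sketch assuming go-git v5's PlainClone API (target directory and URL are only illustrative):

package main

import (
	"fmt"
	"os"

	git "github.com/go-git/go-git/v5"
)

func main() {
	// Clone a repository into ./go-git, streaming server progress to stdout.
	r, err := git.PlainClone("go-git", false, &git.CloneOptions{
		URL:      "https://github.com/go-git/go-git",
		Progress: os.Stdout,
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Print the hash of the checked-out HEAD commit.
	head, err := r.Head()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(head.Hash())
}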

View File

@@ -5,11 +5,16 @@ import (
"bytes"
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"sort"
"strconv"
"github.com/go-git/go-git/v5/internal/url"
format "github.com/go-git/go-git/v5/plumbing/format/config"
"github.com/mitchellh/go-homedir"
)
const (
@@ -32,6 +37,16 @@ var (
ErrRemoteConfigEmptyName = errors.New("remote config: empty name")
)
// Scope defines the scope of a config file, such as local, global or system.
type Scope int
// Available ConfigScope's
const (
LocalScope Scope = iota
GlobalScope
SystemScope
)
// Config contains the repository configuration
// https://www.kernel.org/pub/software/scm/git/docs/git-config.html#FILES
type Config struct {
@@ -46,6 +61,27 @@ type Config struct {
CommentChar string
}
User struct {
// Name is the personal name of the author and the committer of a commit.
Name string
// Email is the email of the author and the committer of a commit.
Email string
}
Author struct {
// Name is the personal name of the author of a commit.
Name string
// Email is the email of the author of a commit.
Email string
}
Committer struct {
// Name is the personal name of the committer of a commit.
Name string
// Email is the email of the committer of a commit.
Email string
}
Pack struct {
// Window controls the size of the sliding window for delta
// compression. The default is 10. A value of 0 turns off
@@ -82,6 +118,77 @@ func NewConfig() *Config {
return config
}
// ReadConfig reads a config file from a io.Reader.
func ReadConfig(r io.Reader) (*Config, error) {
b, err := ioutil.ReadAll(r)
if err != nil {
return nil, err
}
cfg := NewConfig()
if err = cfg.Unmarshal(b); err != nil {
return nil, err
}
return cfg, nil
}
// LoadConfig loads a config file from a given scope. The returned Config
// contains exclusively information from the given scope. If no config file
// can be found for the given scope, an empty one is returned.
func LoadConfig(scope Scope) (*Config, error) {
if scope == LocalScope {
return nil, fmt.Errorf("LocalScope should be read from a ConfigStorer.")
}
files, err := Paths(scope)
if err != nil {
return nil, err
}
for _, file := range files {
f, err := os.Open(file)
if err != nil {
if os.IsNotExist(err) {
continue
}
return nil, err
}
defer f.Close()
return ReadConfig(f)
}
return NewConfig(), nil
}
// Paths returns the config file location for a given scope.
func Paths(scope Scope) ([]string, error) {
var files []string
switch scope {
case GlobalScope:
xdg := os.Getenv("XDG_CONFIG_HOME")
if xdg != "" {
files = append(files, filepath.Join(xdg, "git/config"))
}
home, err := homedir.Dir()
if err != nil {
return nil, err
}
files = append(files,
filepath.Join(home, ".gitconfig"),
filepath.Join(home, ".config/git/config"),
)
case SystemScope:
files = append(files, "/etc/gitconfig")
}
return files, nil
}
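A short usage sketch of the scoped loading added above (the printed fields are only illustrative):

package main

import (
	"fmt"

	"github.com/go-git/go-git/v5/config"
)

func main() {
	// Loads ~/.gitconfig, ~/.config/git/config or $XDG_CONFIG_HOME/git/config;
	// an empty Config is returned when no file exists for the scope.
	cfg, err := config.LoadConfig(config.GlobalScope)
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.User.Name, cfg.User.Email)
}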
// Validate validates the fields and sets the default values.
func (c *Config) Validate() error {
for name, r := range c.Remotes {
@@ -113,6 +220,9 @@ const (
branchSection = "branch"
coreSection = "core"
packSection = "pack"
userSection = "user"
authorSection = "author"
committerSection = "committer"
fetchKey = "fetch"
urlKey = "url"
bareKey = "bare"
@@ -121,6 +231,8 @@ const (
windowKey = "window"
mergeKey = "merge"
rebaseKey = "rebase"
nameKey = "name"
emailKey = "email"
// DefaultPackWindow holds the number of previous objects used to
// generate deltas. The value 10 is the same used by git command.
@@ -138,6 +250,7 @@ func (c *Config) Unmarshal(b []byte) error {
}
c.unmarshalCore()
c.unmarshalUser()
if err := c.unmarshalPack(); err != nil {
return err
}
@@ -160,6 +273,20 @@ func (c *Config) unmarshalCore() {
c.Core.CommentChar = s.Options.Get(commentCharKey)
}
func (c *Config) unmarshalUser() {
s := c.Raw.Section(userSection)
c.User.Name = s.Options.Get(nameKey)
c.User.Email = s.Options.Get(emailKey)
s = c.Raw.Section(authorSection)
c.Author.Name = s.Options.Get(nameKey)
c.Author.Email = s.Options.Get(emailKey)
s = c.Raw.Section(committerSection)
c.Committer.Name = s.Options.Get(nameKey)
c.Committer.Email = s.Options.Get(emailKey)
}
func (c *Config) unmarshalPack() error {
s := c.Raw.Section(packSection)
window := s.Options.Get(windowKey)
@@ -220,6 +347,7 @@ func (c *Config) unmarshalBranches() error {
// Marshal returns Config encoded as a git-config file.
func (c *Config) Marshal() ([]byte, error) {
c.marshalCore()
c.marshalUser()
c.marshalPack()
c.marshalRemotes()
c.marshalSubmodules()
@@ -242,6 +370,35 @@ func (c *Config) marshalCore() {
}
}
func (c *Config) marshalUser() {
s := c.Raw.Section(userSection)
if c.User.Name != "" {
s.SetOption(nameKey, c.User.Name)
}
if c.User.Email != "" {
s.SetOption(emailKey, c.User.Email)
}
s = c.Raw.Section(authorSection)
if c.Author.Name != "" {
s.SetOption(nameKey, c.Author.Name)
}
if c.Author.Email != "" {
s.SetOption(emailKey, c.Author.Email)
}
s = c.Raw.Section(committerSection)
if c.Committer.Name != "" {
s.SetOption(nameKey, c.Committer.Name)
}
if c.Committer.Email != "" {
s.SetOption(emailKey, c.Committer.Email)
}
}
func (c *Config) marshalPack() {
s := c.Raw.Section(packSection)
if c.Pack.Window != DefaultPackWindow {

View File

@@ -25,7 +25,7 @@ var (
// reference even if it isn't a fast-forward.
// eg.: "+refs/heads/*:refs/remotes/origin/*"
//
// https://git-scm.com/book/es/v2/Git-Internals-The-Refspec
// https://git-scm.com/book/en/v2/Git-Internals-The-Refspec
type RefSpec string
// Validate validates the RefSpec
@@ -59,6 +59,11 @@ func (s RefSpec) IsDelete() bool {
return s[0] == refSpecSeparator[0]
}
// IsExactSHA1 returns true if the source is a SHA1 hash.
func (s RefSpec) IsExactSHA1() bool {
return plumbing.IsHash(s.Src())
}
// Src return the src side.
func (s RefSpec) Src() string {
spec := string(s)
@@ -69,8 +74,8 @@ func (s RefSpec) Src() string {
} else {
start = 0
}
end := strings.Index(spec, refSpecSeparator)
end := strings.Index(spec, refSpecSeparator)
return spec[start:end]
}

View File

@@ -10,6 +10,7 @@ require (
github.com/go-git/go-billy/v5 v5.0.0
github.com/go-git/go-git-fixtures/v4 v4.0.1
github.com/google/go-cmp v0.3.0
github.com/imdario/mergo v0.3.9
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99
github.com/jessevdk/go-flags v1.4.0
github.com/kevinburke/ssh_config v0.0.0-20190725054713-01f96b0aa0cd

View File

@@ -22,6 +22,8 @@ github.com/go-git/go-git-fixtures/v4 v4.0.1 h1:q+IFMfLx200Q3scvt2hN79JsEzy4AmBTp
github.com/go-git/go-git-fixtures/v4 v4.0.1/go.mod h1:m+ICp2rF3jDhFgEZ/8yziagdT1C+ZpZcrJjappBCDSw=
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/imdario/mergo v0.3.9 h1:UauaLniWCFHWd+Jp9oCEkTBj8VO/9DKg3PV3VCNMDIg=
github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
github.com/jessevdk/go-flags v1.4.0 h1:4IU2WS7AumrZ/40jfhf4QVDMsQwqA7VEHozFRrGARJA=

View File

@@ -6,12 +6,12 @@ import (
"strings"
"time"
"golang.org/x/crypto/openpgp"
"github.com/go-git/go-git/v5/config"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/object"
"github.com/go-git/go-git/v5/plumbing/protocol/packp/sideband"
"github.com/go-git/go-git/v5/plumbing/transport"
"golang.org/x/crypto/openpgp"
)
// SubmoduleRescursivity defines how depth will affect any submodule recursive
@@ -190,6 +190,9 @@ type PushOptions struct {
// Prune specify that remote refs that match given RefSpecs and that do
// not exist locally will be removed.
Prune bool
// Force allows the push to update a remote branch even when the local
// branch does not descend from it.
Force bool
}
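A brief sketch of the new Force flag in use, assuming Repository.Push on an opened repository with a remote named "origin" (error handling kept minimal):

package main

import (
	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/config"
)

func main() {
	r, err := git.PlainOpen(".")
	if err != nil {
		panic(err)
	}
	// Force permits a non-fast-forward update of the remote branch
	// without needing the "+" prefix on the refspec.
	err = r.Push(&git.PushOptions{
		RemoteName: "origin",
		RefSpecs:   []config.RefSpec{"refs/heads/master:refs/heads/master"},
		Force:      true,
	})
	if err != nil && err != git.NoErrAlreadyUpToDate {
		panic(err)
	}
}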
// Validate validates the fields and sets the default values.
@@ -375,7 +378,8 @@ type CommitOptions struct {
// All automatically stage files that have been modified and deleted, but
// new files you have not told Git about are not affected.
All bool
// Author is the author's signature of the commit.
// Author is the author's signature of the commit. If Author is empty the
// Name and Email are read from the config, and time.Now is used as When.
Author *object.Signature
// Committer is the committer's signature of the commit. If Committer is
// nil the Author signature is used.
@@ -392,7 +396,9 @@ type CommitOptions struct {
// Validate validates the fields and sets the default values.
func (o *CommitOptions) Validate(r *Repository) error {
if o.Author == nil {
return ErrMissingAuthor
if err := o.loadConfigAuthorAndCommitter(r); err != nil {
return err
}
}
if o.Committer == nil {
@@ -413,6 +419,43 @@ func (o *CommitOptions) Validate(r *Repository) error {
return nil
}
func (o *CommitOptions) loadConfigAuthorAndCommitter(r *Repository) error {
cfg, err := r.ConfigScoped(config.SystemScope)
if err != nil {
return err
}
if o.Author == nil && cfg.Author.Email != "" && cfg.Author.Name != "" {
o.Author = &object.Signature{
Name: cfg.Author.Name,
Email: cfg.Author.Email,
When: time.Now(),
}
}
if o.Committer == nil && cfg.Committer.Email != "" && cfg.Committer.Name != "" {
o.Committer = &object.Signature{
Name: cfg.Committer.Name,
Email: cfg.Committer.Email,
When: time.Now(),
}
}
if o.Author == nil && cfg.User.Email != "" && cfg.User.Name != "" {
o.Author = &object.Signature{
Name: cfg.User.Name,
Email: cfg.User.Email,
When: time.Now(),
}
}
if o.Author == nil {
return ErrMissingAuthor
}
return nil
}
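A short sketch of what this fallback enables, assuming the enclosing git config defines user.name and user.email (so CommitOptions no longer needs an explicit Author):

package main

import (
	"fmt"

	git "github.com/go-git/go-git/v5"
)

func main() {
	r, err := git.PlainOpen(".")
	if err != nil {
		panic(err)
	}
	w, err := r.Worktree()
	if err != nil {
		panic(err)
	}
	// Author/Committer may now be omitted; they are filled in from the
	// [user]/[author]/[committer] config sections, with time.Now as When.
	hash, err := w.Commit("example commit", &git.CommitOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(hash)
}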
var (
ErrMissingName = errors.New("name field is required")
ErrMissingTagger = errors.New("tagger field is required")

View File

@@ -0,0 +1,38 @@
package color
// TODO read colors from a github.com/go-git/go-git/plumbing/format/config.Config struct
// TODO implement color parsing, see https://github.com/git/git/blob/v2.26.2/color.c
// Colors. See https://github.com/git/git/blob/v2.26.2/color.h#L24-L53.
const (
Normal = ""
Reset = "\033[m"
Bold = "\033[1m"
Red = "\033[31m"
Green = "\033[32m"
Yellow = "\033[33m"
Blue = "\033[34m"
Magenta = "\033[35m"
Cyan = "\033[36m"
BoldRed = "\033[1;31m"
BoldGreen = "\033[1;32m"
BoldYellow = "\033[1;33m"
BoldBlue = "\033[1;34m"
BoldMagenta = "\033[1;35m"
BoldCyan = "\033[1;36m"
FaintRed = "\033[2;31m"
FaintGreen = "\033[2;32m"
FaintYellow = "\033[2;33m"
FaintBlue = "\033[2;34m"
FaintMagenta = "\033[2;35m"
FaintCyan = "\033[2;36m"
BgRed = "\033[41m"
BgGreen = "\033[42m"
BgYellow = "\033[43m"
BgBlue = "\033[44m"
BgMagenta = "\033[45m"
BgCyan = "\033[46m"
Faint = "\033[2m"
FaintItalic = "\033[2;3m"
Reverse = "\033[7m"
)

View File

@@ -0,0 +1,97 @@
package diff
import "github.com/go-git/go-git/v5/plumbing/color"
// A ColorKey is a key into a ColorConfig map and also equal to the key in the
// diff.color subsection of the config. See
// https://github.com/git/git/blob/v2.26.2/diff.c#L83-L106.
type ColorKey string
// ColorKeys.
const (
Context ColorKey = "context"
Meta ColorKey = "meta"
Frag ColorKey = "frag"
Old ColorKey = "old"
New ColorKey = "new"
Commit ColorKey = "commit"
Whitespace ColorKey = "whitespace"
Func ColorKey = "func"
OldMoved ColorKey = "oldMoved"
OldMovedAlternative ColorKey = "oldMovedAlternative"
OldMovedDimmed ColorKey = "oldMovedDimmed"
OldMovedAlternativeDimmed ColorKey = "oldMovedAlternativeDimmed"
NewMoved ColorKey = "newMoved"
NewMovedAlternative ColorKey = "newMovedAlternative"
NewMovedDimmed ColorKey = "newMovedDimmed"
NewMovedAlternativeDimmed ColorKey = "newMovedAlternativeDimmed"
ContextDimmed ColorKey = "contextDimmed"
OldDimmed ColorKey = "oldDimmed"
NewDimmed ColorKey = "newDimmed"
ContextBold ColorKey = "contextBold"
OldBold ColorKey = "oldBold"
NewBold ColorKey = "newBold"
)
// A ColorConfig is a color configuration. A nil or empty ColorConfig
// corresponds to no color.
type ColorConfig map[ColorKey]string
// A ColorConfigOption sets an option on a ColorConfig.
type ColorConfigOption func(ColorConfig)
// WithColor sets the color for key.
func WithColor(key ColorKey, color string) ColorConfigOption {
return func(cc ColorConfig) {
cc[key] = color
}
}
// defaultColorConfig is the default color configuration. See
// https://github.com/git/git/blob/v2.26.2/diff.c#L57-L81.
var defaultColorConfig = ColorConfig{
Context: color.Normal,
Meta: color.Bold,
Frag: color.Cyan,
Old: color.Red,
New: color.Green,
Commit: color.Yellow,
Whitespace: color.BgRed,
Func: color.Normal,
OldMoved: color.BoldMagenta,
OldMovedAlternative: color.BoldBlue,
OldMovedDimmed: color.Faint,
OldMovedAlternativeDimmed: color.FaintItalic,
NewMoved: color.BoldCyan,
NewMovedAlternative: color.BoldYellow,
NewMovedDimmed: color.Faint,
NewMovedAlternativeDimmed: color.FaintItalic,
ContextDimmed: color.Faint,
OldDimmed: color.FaintRed,
NewDimmed: color.FaintGreen,
ContextBold: color.Bold,
OldBold: color.BoldRed,
NewBold: color.BoldGreen,
}
// NewColorConfig returns a new ColorConfig.
func NewColorConfig(options ...ColorConfigOption) ColorConfig {
cc := make(ColorConfig)
for key, value := range defaultColorConfig {
cc[key] = value
}
for _, option := range options {
option(cc)
}
return cc
}
// Reset returns the ANSI escape sequence to reset the color with key set from
// cc. If no color was set then no reset is needed so it returns the empty
// string.
func (cc ColorConfig) Reset(key ColorKey) string {
if cc[key] == "" {
return ""
}
return color.Reset
}

View File

@@ -1,157 +1,158 @@
package diff
import (
"bytes"
"fmt"
"io"
"regexp"
"strconv"
"strings"
"github.com/go-git/go-git/v5/plumbing"
)
const (
diffInit = "diff --git a/%s b/%s\n"
// DefaultContextLines is the default number of context lines.
const DefaultContextLines = 3
chunkStart = "@@ -"
chunkMiddle = " +"
chunkEnd = " @@%s\n"
chunkCount = "%d,%d"
var (
splitLinesRegexp = regexp.MustCompile(`[^\n]*(\n|$)`)
noFilePath = "/dev/null"
aDir = "a/"
bDir = "b/"
operationChar = map[Operation]byte{
Add: '+',
Delete: '-',
Equal: ' ',
}
fPath = "--- %s\n"
tPath = "+++ %s\n"
binary = "Binary files %s and %s differ\n"
addLine = "+%s%s"
deleteLine = "-%s%s"
equalLine = " %s%s"
noNewLine = "\n\\ No newline at end of file\n"
oldMode = "old mode %o\n"
newMode = "new mode %o\n"
deletedFileMode = "deleted file mode %o\n"
newFileMode = "new file mode %o\n"
renameFrom = "from"
renameTo = "to"
renameFileMode = "rename %s %s\n"
indexAndMode = "index %s..%s %o\n"
indexNoMode = "index %s..%s\n"
DefaultContextLines = 3
operationColorKey = map[Operation]ColorKey{
Add: New,
Delete: Old,
Equal: Context,
}
)
// UnifiedEncoder encodes an unified diff into the provided Writer.
// There are some unsupported features:
// - Similarity index for renames
// - Sort hash representation
// UnifiedEncoder encodes a unified diff into the provided Writer. It does not
// support similarity index for renames or sorting hash representations.
type UnifiedEncoder struct {
io.Writer
// ctxLines is the count of unchanged lines that will appear
// surrounding a change.
ctxLines int
// contextLines is the count of unchanged lines that will appear surrounding
// a change.
contextLines int
buf bytes.Buffer
// colorConfig is the color configuration. The default is no color.
color ColorConfig
}
func NewUnifiedEncoder(w io.Writer, ctxLines int) *UnifiedEncoder {
return &UnifiedEncoder{ctxLines: ctxLines, Writer: w}
// NewUnifiedEncoder returns a new UnifiedEncoder that writes to w.
func NewUnifiedEncoder(w io.Writer, contextLines int) *UnifiedEncoder {
return &UnifiedEncoder{
Writer: w,
contextLines: contextLines,
}
}
// SetColor sets e's color configuration and returns e.
func (e *UnifiedEncoder) SetColor(colorConfig ColorConfig) *UnifiedEncoder {
e.color = colorConfig
return e
}
// Encode encodes patch.
func (e *UnifiedEncoder) Encode(patch Patch) error {
e.printMessage(patch.Message())
sb := &strings.Builder{}
if err := e.encodeFilePatch(patch.FilePatches()); err != nil {
return err
if message := patch.Message(); message != "" {
sb.WriteString(message)
if !strings.HasSuffix(message, "\n") {
sb.WriteByte('\n')
}
}
_, err := e.buf.WriteTo(e)
for _, filePatch := range patch.FilePatches() {
e.writeFilePatchHeader(sb, filePatch)
g := newHunksGenerator(filePatch.Chunks(), e.contextLines)
for _, hunk := range g.Generate() {
hunk.writeTo(sb, e.color)
}
}
_, err := e.Write([]byte(sb.String()))
return err
}
func (e *UnifiedEncoder) encodeFilePatch(filePatches []FilePatch) error {
for _, p := range filePatches {
f, t := p.Files()
if err := e.header(f, t, p.IsBinary()); err != nil {
return err
}
g := newHunksGenerator(p.Chunks(), e.ctxLines)
for _, c := range g.Generate() {
c.WriteTo(&e.buf)
}
func (e *UnifiedEncoder) writeFilePatchHeader(sb *strings.Builder, filePatch FilePatch) {
from, to := filePatch.Files()
if from == nil && to == nil {
return
}
isBinary := filePatch.IsBinary()
return nil
}
func (e *UnifiedEncoder) printMessage(message string) {
isEmpty := message == ""
hasSuffix := strings.HasSuffix(message, "\n")
if !isEmpty && !hasSuffix {
message += "\n"
}
e.buf.WriteString(message)
}
func (e *UnifiedEncoder) header(from, to File, isBinary bool) error {
var lines []string
switch {
case from == nil && to == nil:
return nil
case from != nil && to != nil:
hashEquals := from.Hash() == to.Hash()
fmt.Fprintf(&e.buf, diffInit, from.Path(), to.Path())
lines = append(lines,
fmt.Sprintf("diff --git a/%s b/%s", from.Path(), to.Path()),
)
if from.Mode() != to.Mode() {
fmt.Fprintf(&e.buf, oldMode+newMode, from.Mode(), to.Mode())
lines = append(lines,
fmt.Sprintf("old mode %o", from.Mode()),
fmt.Sprintf("new mode %o", to.Mode()),
)
}
if from.Path() != to.Path() {
fmt.Fprintf(&e.buf,
renameFileMode+renameFileMode,
renameFrom, from.Path(), renameTo, to.Path())
lines = append(lines,
fmt.Sprintf("rename from %s", from.Path()),
fmt.Sprintf("rename to %s", to.Path()),
)
}
if from.Mode() != to.Mode() && !hashEquals {
fmt.Fprintf(&e.buf, indexNoMode, from.Hash(), to.Hash())
lines = append(lines,
fmt.Sprintf("index %s..%s", from.Hash(), to.Hash()),
)
} else if !hashEquals {
fmt.Fprintf(&e.buf, indexAndMode, from.Hash(), to.Hash(), from.Mode())
lines = append(lines,
fmt.Sprintf("index %s..%s %o", from.Hash(), to.Hash(), from.Mode()),
)
}
if !hashEquals {
e.pathLines(isBinary, aDir+from.Path(), bDir+to.Path())
lines = e.appendPathLines(lines, "a/"+from.Path(), "b/"+to.Path(), isBinary)
}
case from == nil:
fmt.Fprintf(&e.buf, diffInit, to.Path(), to.Path())
fmt.Fprintf(&e.buf, newFileMode, to.Mode())
fmt.Fprintf(&e.buf, indexNoMode, plumbing.ZeroHash, to.Hash())
e.pathLines(isBinary, noFilePath, bDir+to.Path())
lines = append(lines,
fmt.Sprintf("diff --git a/%s b/%s", to.Path(), to.Path()),
fmt.Sprintf("new file mode %o", to.Mode()),
fmt.Sprintf("index %s..%s", plumbing.ZeroHash, to.Hash()),
)
lines = e.appendPathLines(lines, "/dev/null", "b/"+to.Path(), isBinary)
case to == nil:
fmt.Fprintf(&e.buf, diffInit, from.Path(), from.Path())
fmt.Fprintf(&e.buf, deletedFileMode, from.Mode())
fmt.Fprintf(&e.buf, indexNoMode, from.Hash(), plumbing.ZeroHash)
e.pathLines(isBinary, aDir+from.Path(), noFilePath)
lines = append(lines,
fmt.Sprintf("diff --git a/%s b/%s", from.Path(), from.Path()),
fmt.Sprintf("deleted file mode %o", from.Mode()),
fmt.Sprintf("index %s..%s", from.Hash(), plumbing.ZeroHash),
)
lines = e.appendPathLines(lines, "a/"+from.Path(), "/dev/null", isBinary)
}
return nil
sb.WriteString(e.color[Meta])
sb.WriteString(lines[0])
for _, line := range lines[1:] {
sb.WriteByte('\n')
sb.WriteString(line)
}
sb.WriteString(e.color.Reset(Meta))
sb.WriteByte('\n')
}
func (e *UnifiedEncoder) pathLines(isBinary bool, fromPath, toPath string) {
format := fPath + tPath
func (e *UnifiedEncoder) appendPathLines(lines []string, fromPath, toPath string, isBinary bool) []string {
if isBinary {
format = binary
return append(lines,
fmt.Sprintf("Binary files %s and %s differ", fromPath, toPath),
)
}
fmt.Fprintf(&e.buf, format, fromPath, toPath)
return append(lines,
fmt.Sprintf("--- %s", fromPath),
fmt.Sprintf("+++ %s", toPath),
)
}
type hunksGenerator struct {
@@ -170,84 +171,84 @@ func newHunksGenerator(chunks []Chunk, ctxLines int) *hunksGenerator {
}
}
func (c *hunksGenerator) Generate() []*hunk {
for i, chunk := range c.chunks {
ls := splitLines(chunk.Content())
lsLen := len(ls)
func (g *hunksGenerator) Generate() []*hunk {
for i, chunk := range g.chunks {
lines := splitLines(chunk.Content())
nLines := len(lines)
switch chunk.Type() {
case Equal:
c.fromLine += lsLen
c.toLine += lsLen
c.processEqualsLines(ls, i)
g.fromLine += nLines
g.toLine += nLines
g.processEqualsLines(lines, i)
case Delete:
if lsLen != 0 {
c.fromLine++
if nLines != 0 {
g.fromLine++
}
c.processHunk(i, chunk.Type())
c.fromLine += lsLen - 1
c.current.AddOp(chunk.Type(), ls...)
g.processHunk(i, chunk.Type())
g.fromLine += nLines - 1
g.current.AddOp(chunk.Type(), lines...)
case Add:
if lsLen != 0 {
c.toLine++
if nLines != 0 {
g.toLine++
}
c.processHunk(i, chunk.Type())
c.toLine += lsLen - 1
c.current.AddOp(chunk.Type(), ls...)
g.processHunk(i, chunk.Type())
g.toLine += nLines - 1
g.current.AddOp(chunk.Type(), lines...)
}
if i == len(c.chunks)-1 && c.current != nil {
c.hunks = append(c.hunks, c.current)
if i == len(g.chunks)-1 && g.current != nil {
g.hunks = append(g.hunks, g.current)
}
}
return c.hunks
return g.hunks
}
func (c *hunksGenerator) processHunk(i int, op Operation) {
if c.current != nil {
func (g *hunksGenerator) processHunk(i int, op Operation) {
if g.current != nil {
return
}
var ctxPrefix string
linesBefore := len(c.beforeContext)
if linesBefore > c.ctxLines {
ctxPrefix = " " + c.beforeContext[linesBefore-c.ctxLines-1]
c.beforeContext = c.beforeContext[linesBefore-c.ctxLines:]
linesBefore = c.ctxLines
linesBefore := len(g.beforeContext)
if linesBefore > g.ctxLines {
ctxPrefix = g.beforeContext[linesBefore-g.ctxLines-1]
g.beforeContext = g.beforeContext[linesBefore-g.ctxLines:]
linesBefore = g.ctxLines
}
c.current = &hunk{ctxPrefix: strings.TrimSuffix(ctxPrefix, "\n")}
c.current.AddOp(Equal, c.beforeContext...)
g.current = &hunk{ctxPrefix: strings.TrimSuffix(ctxPrefix, "\n")}
g.current.AddOp(Equal, g.beforeContext...)
switch op {
case Delete:
c.current.fromLine, c.current.toLine =
c.addLineNumbers(c.fromLine, c.toLine, linesBefore, i, Add)
g.current.fromLine, g.current.toLine =
g.addLineNumbers(g.fromLine, g.toLine, linesBefore, i, Add)
case Add:
c.current.toLine, c.current.fromLine =
c.addLineNumbers(c.toLine, c.fromLine, linesBefore, i, Delete)
g.current.toLine, g.current.fromLine =
g.addLineNumbers(g.toLine, g.fromLine, linesBefore, i, Delete)
}
c.beforeContext = nil
g.beforeContext = nil
}
// addLineNumbers obtains the line numbers in a new chunk
func (c *hunksGenerator) addLineNumbers(la, lb int, linesBefore int, i int, op Operation) (cla, clb int) {
// addLineNumbers obtains the line numbers in a new chunk.
func (g *hunksGenerator) addLineNumbers(la, lb int, linesBefore int, i int, op Operation) (cla, clb int) {
cla = la - linesBefore
// we need to search for a reference for the next diff
switch {
case linesBefore != 0 && c.ctxLines != 0:
if lb > c.ctxLines {
clb = lb - c.ctxLines + 1
case linesBefore != 0 && g.ctxLines != 0:
if lb > g.ctxLines {
clb = lb - g.ctxLines + 1
} else {
clb = 1
}
case c.ctxLines == 0:
case g.ctxLines == 0:
clb = lb
case i != len(c.chunks)-1:
next := c.chunks[i+1]
case i != len(g.chunks)-1:
next := g.chunks[i+1]
if next.Type() == op || next.Type() == Equal {
// this diff will be into this chunk
clb = lb + 1
@@ -257,34 +258,32 @@ func (c *hunksGenerator) addLineNumbers(la, lb int, linesBefore int, i int, op O
return
}
func (c *hunksGenerator) processEqualsLines(ls []string, i int) {
if c.current == nil {
c.beforeContext = append(c.beforeContext, ls...)
func (g *hunksGenerator) processEqualsLines(ls []string, i int) {
if g.current == nil {
g.beforeContext = append(g.beforeContext, ls...)
return
}
c.afterContext = append(c.afterContext, ls...)
if len(c.afterContext) <= c.ctxLines*2 && i != len(c.chunks)-1 {
c.current.AddOp(Equal, c.afterContext...)
c.afterContext = nil
g.afterContext = append(g.afterContext, ls...)
if len(g.afterContext) <= g.ctxLines*2 && i != len(g.chunks)-1 {
g.current.AddOp(Equal, g.afterContext...)
g.afterContext = nil
} else {
ctxLines := c.ctxLines
if ctxLines > len(c.afterContext) {
ctxLines = len(c.afterContext)
ctxLines := g.ctxLines
if ctxLines > len(g.afterContext) {
ctxLines = len(g.afterContext)
}
c.current.AddOp(Equal, c.afterContext[:ctxLines]...)
c.hunks = append(c.hunks, c.current)
g.current.AddOp(Equal, g.afterContext[:ctxLines]...)
g.hunks = append(g.hunks, g.current)
c.current = nil
c.beforeContext = c.afterContext[ctxLines:]
c.afterContext = nil
g.current = nil
g.beforeContext = g.afterContext[ctxLines:]
g.afterContext = nil
}
}
var splitLinesRE = regexp.MustCompile(`[^\n]*(\n|$)`)
func splitLines(s string) []string {
out := splitLinesRE.FindAllString(s, -1)
out := splitLinesRegexp.FindAllString(s, -1)
if out[len(out)-1] == "" {
out = out[:len(out)-1]
}
@@ -302,44 +301,59 @@ type hunk struct {
ops []*op
}
func (c *hunk) WriteTo(buf *bytes.Buffer) {
buf.WriteString(chunkStart)
func (h *hunk) writeTo(sb *strings.Builder, color ColorConfig) {
sb.WriteString(color[Frag])
sb.WriteString("@@ -")
if c.fromCount == 1 {
fmt.Fprintf(buf, "%d", c.fromLine)
if h.fromCount == 1 {
sb.WriteString(strconv.Itoa(h.fromLine))
} else {
fmt.Fprintf(buf, chunkCount, c.fromLine, c.fromCount)
sb.WriteString(strconv.Itoa(h.fromLine))
sb.WriteByte(',')
sb.WriteString(strconv.Itoa(h.fromCount))
}
buf.WriteString(chunkMiddle)
sb.WriteString(" +")
if c.toCount == 1 {
fmt.Fprintf(buf, "%d", c.toLine)
if h.toCount == 1 {
sb.WriteString(strconv.Itoa(h.toLine))
} else {
fmt.Fprintf(buf, chunkCount, c.toLine, c.toCount)
sb.WriteString(strconv.Itoa(h.toLine))
sb.WriteByte(',')
sb.WriteString(strconv.Itoa(h.toCount))
}
fmt.Fprintf(buf, chunkEnd, c.ctxPrefix)
sb.WriteString(" @@")
sb.WriteString(color.Reset(Frag))
for _, d := range c.ops {
buf.WriteString(d.String())
if h.ctxPrefix != "" {
sb.WriteByte(' ')
sb.WriteString(color[Func])
sb.WriteString(h.ctxPrefix)
sb.WriteString(color.Reset(Func))
}
sb.WriteByte('\n')
for _, op := range h.ops {
op.writeTo(sb, color)
}
}
func (c *hunk) AddOp(t Operation, s ...string) {
ls := len(s)
func (h *hunk) AddOp(t Operation, ss ...string) {
n := len(ss)
switch t {
case Add:
c.toCount += ls
h.toCount += n
case Delete:
c.fromCount += ls
h.fromCount += n
case Equal:
c.toCount += ls
c.fromCount += ls
h.toCount += n
h.fromCount += n
}
for _, l := range s {
c.ops = append(c.ops, &op{l, t})
for _, s := range ss {
h.ops = append(h.ops, &op{s, t})
}
}
@@ -348,20 +362,15 @@ type op struct {
t Operation
}
func (o *op) String() string {
var prefix, suffix string
switch o.t {
case Add:
prefix = addLine
case Delete:
prefix = deleteLine
case Equal:
prefix = equalLine
func (o *op) writeTo(sb *strings.Builder, color ColorConfig) {
colorKey := operationColorKey[o.t]
sb.WriteString(color[colorKey])
sb.WriteByte(operationChar[o.t])
if strings.HasSuffix(o.text, "\n") {
sb.WriteString(strings.TrimSuffix(o.text, "\n"))
} else {
sb.WriteString(o.text + "\n\\ No newline at end of file")
}
n := len(o.text)
if n > 0 && o.text[n-1] != '\n' {
suffix = noNewLine
}
return fmt.Sprintf(prefix, o.text, suffix)
sb.WriteString(color.Reset(colorKey))
sb.WriteByte('\n')
}
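A minimal sketch tying the pieces of this rewrite together (opening a repository, producing a patch between HEAD and its parent, and encoding it with the new color support); the repository path is only illustrative:

package main

import (
	"os"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/format/diff"
)

func main() {
	r, err := git.PlainOpen(".")
	if err != nil {
		panic(err)
	}
	ref, err := r.Head()
	if err != nil {
		panic(err)
	}
	commit, err := r.CommitObject(ref.Hash())
	if err != nil {
		panic(err)
	}
	parent, err := commit.Parent(0)
	if err != nil {
		panic(err)
	}
	patch, err := parent.Patch(commit)
	if err != nil {
		panic(err)
	}
	// Encode the patch as a colored unified diff with the default 3 context lines.
	enc := diff.NewUnifiedEncoder(os.Stdout, diff.DefaultContextLines).
		SetColor(diff.NewColorConfig())
	if err := enc.Encode(patch); err != nil {
		panic(err)
	}
}

Per-key overrides can be passed to NewColorConfig via WithColor, for example diff.NewColorConfig(diff.WithColor(diff.New, color.BoldGreen)) using the plumbing/color constants introduced above.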

View File

@@ -4,6 +4,7 @@ import (
"bytes"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/utils/ioutil"
)
// See https://github.com/jelmer/dulwich/blob/master/dulwich/pack.py and
@@ -27,17 +28,20 @@ func GetDelta(base, target plumbing.EncodedObject) (plumbing.EncodedObject, erro
return getDelta(new(deltaIndex), base, target)
}
func getDelta(index *deltaIndex, base, target plumbing.EncodedObject) (plumbing.EncodedObject, error) {
func getDelta(index *deltaIndex, base, target plumbing.EncodedObject) (o plumbing.EncodedObject, err error) {
br, err := base.Reader()
if err != nil {
return nil, err
}
defer br.Close()
defer ioutil.CheckClose(br, &err)
tr, err := target.Reader()
if err != nil {
return nil, err
}
defer tr.Close()
defer ioutil.CheckClose(tr, &err)
bb := bufPool.Get().(*bytes.Buffer)
defer bufPool.Put(bb)

View File

@@ -9,6 +9,7 @@ import (
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/binary"
"github.com/go-git/go-git/v5/utils/ioutil"
)
// Encoder gets the data from the storage and write it into the writer in PACK
@@ -80,7 +81,7 @@ func (e *Encoder) head(numEntries int) error {
)
}
func (e *Encoder) entry(o *ObjectToPack) error {
func (e *Encoder) entry(o *ObjectToPack) (err error) {
if o.WantWrite() {
// A cycle exists in this delta chain. This should only occur if a
// selected object representation disappeared during writing
@@ -119,17 +120,22 @@ func (e *Encoder) entry(o *ObjectToPack) error {
}
e.zw.Reset(e.w)
defer ioutil.CheckClose(e.zw, &err)
or, err := o.Object.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(or, &err)
_, err = io.Copy(e.zw, or)
if err != nil {
return err
}
return e.zw.Close()
return nil
}
func (e *Encoder) writeBaseIfDelta(o *ObjectToPack) error {

View File

@@ -10,6 +10,7 @@ import (
"github.com/go-git/go-git/v5/plumbing/cache"
"github.com/go-git/go-git/v5/plumbing/format/idxfile"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/ioutil"
)
var (
@@ -307,12 +308,14 @@ func (p *Packfile) getNextMemoryObject(h *ObjectHeader) (plumbing.EncodedObject,
return obj, nil
}
func (p *Packfile) fillRegularObjectContent(obj plumbing.EncodedObject) error {
func (p *Packfile) fillRegularObjectContent(obj plumbing.EncodedObject) (err error) {
w, err := obj.Writer()
if err != nil {
return err
}
defer ioutil.CheckClose(w, &err)
_, _, err = p.s.NextObject(w)
p.cachePut(obj)

View File

@@ -4,11 +4,12 @@ import (
"bytes"
"errors"
"io"
"io/ioutil"
stdioutil "io/ioutil"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/cache"
"github.com/go-git/go-git/v5/plumbing/storer"
"github.com/go-git/go-git/v5/utils/ioutil"
)
var (
@@ -283,7 +284,7 @@ func (p *Parser) resolveDeltas() error {
if !obj.IsDelta() && len(obj.Children) > 0 {
for _, child := range obj.Children {
if err := p.resolveObject(ioutil.Discard, child, content); err != nil {
if err := p.resolveObject(stdioutil.Discard, child, content); err != nil {
return err
}
}
@@ -298,7 +299,7 @@ func (p *Parser) resolveDeltas() error {
return nil
}
func (p *Parser) get(o *objectInfo, buf *bytes.Buffer) error {
func (p *Parser) get(o *objectInfo, buf *bytes.Buffer) (err error) {
if !o.ExternalRef { // skip cache check for placeholder parents
b, ok := p.cache.Get(o.Offset)
if ok {
@@ -310,17 +311,21 @@ func (p *Parser) get(o *objectInfo, buf *bytes.Buffer) error {
// If it's not on the cache and is not a delta we can try to find it in the
// storage, if there's one. External refs must enter here.
if p.storage != nil && !o.Type.IsDelta() {
e, err := p.storage.EncodedObject(plumbing.AnyObject, o.SHA1)
var e plumbing.EncodedObject
e, err = p.storage.EncodedObject(plumbing.AnyObject, o.SHA1)
if err != nil {
return err
}
o.Type = e.Type()
r, err := e.Reader()
var r io.ReadCloser
r, err = e.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(r, &err)
_, err = buf.ReadFrom(io.LimitReader(r, e.Size()))
return err
}

View File

@@ -6,6 +6,7 @@ import (
"io"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/utils/ioutil"
)
// See https://github.com/git/git/blob/49fa3dc76179e04b0833542fa52d0f287a4955ac/delta.h
@@ -16,17 +17,21 @@ import (
const deltaSizeMin = 4
// ApplyDelta writes to target the result of applying the modification deltas in delta to base.
func ApplyDelta(target, base plumbing.EncodedObject, delta []byte) error {
func ApplyDelta(target, base plumbing.EncodedObject, delta []byte) (err error) {
r, err := base.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(r, &err)
w, err := target.Writer()
if err != nil {
return err
}
defer ioutil.CheckClose(w, &err)
buf := bufPool.Get().(*bytes.Buffer)
defer bufPool.Put(buf)
buf.Reset()

View File

@@ -71,3 +71,13 @@ type HashSlice []Hash
func (p HashSlice) Len() int { return len(p) }
func (p HashSlice) Less(i, j int) bool { return bytes.Compare(p[i][:], p[j][:]) < 0 }
func (p HashSlice) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
// IsHash returns true if the given string is a valid hash.
func IsHash(s string) bool {
if len(s) != 40 {
return false
}
_, err := hex.DecodeString(s)
return err == nil
}

View File

@@ -18,7 +18,7 @@ type Change struct {
To ChangeEntry
}
var empty = ChangeEntry{}
var empty ChangeEntry
// Action returns the kind of action represented by the change, an
// insertion, a deletion or a modification.
@@ -27,9 +27,11 @@ func (c *Change) Action() (merkletrie.Action, error) {
return merkletrie.Action(0),
fmt.Errorf("malformed change: empty from and to")
}
if c.From == empty {
return merkletrie.Insert, nil
}
if c.To == empty {
return merkletrie.Delete, nil
}

View File

@@ -78,21 +78,30 @@ func (c *Commit) Tree() (*Tree, error) {
// PatchContext returns the Patch between the actual commit and the provided one.
// An error will be returned if the context expires. The provided context must be non-nil.
//
// NOTE: Since version 5.1.0 renames are correctly handled; the settings
// used are the recommended DefaultDiffTreeOptions.
func (c *Commit) PatchContext(ctx context.Context, to *Commit) (*Patch, error) {
fromTree, err := c.Tree()
if err != nil {
return nil, err
}
toTree, err := to.Tree()
if err != nil {
return nil, err
var toTree *Tree
if to != nil {
toTree, err = to.Tree()
if err != nil {
return nil, err
}
}
return fromTree.PatchContext(ctx, toTree)
}
// Patch returns the Patch between the actual commit and the provided one.
//
// NOTE: Since version 5.1.0 renames are correctly handled; the settings
// used are the recommended DefaultDiffTreeOptions.
func (c *Commit) Patch(to *Commit) (*Patch, error) {
return c.PatchContext(context.Background(), to)
}
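A compact sketch of what the nil-tolerant signature allows (error handling elided for brevity; opening "." is only illustrative):

package main

import (
	"fmt"

	git "github.com/go-git/go-git/v5"
)

func main() {
	r, _ := git.PlainOpen(".")
	ref, _ := r.Head()
	commit, _ := r.CommitObject(ref.Hash())

	// "to" may now be nil: the commit's tree is diffed against an empty tree.
	patch, err := commit.Patch(nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(patch.FilePatches()), "file patches")
}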

View File

@@ -10,14 +10,62 @@ import (
// DiffTree compares the content and mode of the blobs found via two
// tree objects.
// DiffTree does not perform rename detection, use DiffTreeWithOptions
// instead to detect renames.
func DiffTree(a, b *Tree) (Changes, error) {
return DiffTreeContext(context.Background(), a, b)
}
// DiffTree compares the content and mode of the blobs found via two
// DiffTreeContext compares the content and mode of the blobs found via two
// tree objects. Provided context must be non-nil.
// An error will be return if context expires
// An error will be returned if context expires.
func DiffTreeContext(ctx context.Context, a, b *Tree) (Changes, error) {
return DiffTreeWithOptions(ctx, a, b, nil)
}
// DiffTreeOptions are the configurable options when performing a diff tree.
type DiffTreeOptions struct {
// DetectRenames is whether the diff tree will use rename detection.
DetectRenames bool
// RenameScore is the threshold of similarity between files required to
// consider that a pair of delete and insert is a rename. The number must
// be between 0 and 100.
RenameScore uint
// RenameLimit is the maximum number of files that can be compared when
// detecting renames. The number of comparisons that have to be performed
// is equal to the number of deleted files * the number of added files.
// That means that if 100 files were deleted and 50 files were added, 5000
// file comparisons may be needed. So, if the rename limit is 50, both the
// number of deleted and the number of added files need to be 50 or fewer.
// A value of 0 means no limit.
RenameLimit uint
// OnlyExactRenames performs only detection of exact renames and will not perform
// any detection of renames based on file similarity.
OnlyExactRenames bool
}
// DefaultDiffTreeOptions are the default and recommended options for the
// diff tree.
var DefaultDiffTreeOptions = &DiffTreeOptions{
DetectRenames: true,
RenameScore: 60,
RenameLimit: 0,
OnlyExactRenames: false,
}
// DiffTreeWithOptions compares the content and mode of the blobs found
// via two tree objects with the given options. The provided context
// must be non-nil.
// If no options are passed, no rename detection will be performed. The
// recommended options are DefaultDiffTreeOptions.
// An error will be returned if the context expires.
// This function will be deprecated and removed in v6, at which point the
// default behaviour of DiffTree will be to detect renames.
func DiffTreeWithOptions(
ctx context.Context,
a, b *Tree,
opts *DiffTreeOptions,
) (Changes, error) {
from := NewTreeRootNode(a)
to := NewTreeRootNode(b)
@@ -33,5 +81,18 @@ func DiffTreeContext(ctx context.Context, a, b *Tree) (Changes, error) {
return nil, err
}
return newChanges(merkletrieChanges)
changes, err := newChanges(merkletrieChanges)
if err != nil {
return nil, err
}
if opts == nil {
opts = new(DiffTreeOptions)
}
if opts.DetectRenames {
return DetectRenames(changes, opts)
}
return changes, nil
}
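A short usage sketch of the rename-aware diff added above, comparing HEAD against its parent with the recommended options (error handling kept minimal; opening "." is only illustrative):

package main

import (
	"context"
	"fmt"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing/object"
)

func main() {
	r, _ := git.PlainOpen(".")
	ref, _ := r.Head()
	commit, _ := r.CommitObject(ref.Hash())
	parent, _ := commit.Parent(0)

	fromTree, _ := parent.Tree()
	toTree, _ := commit.Tree()

	// Rename detection is enabled through DefaultDiffTreeOptions.
	changes, err := object.DiffTreeWithOptions(context.Background(), fromTree, toTree, object.DefaultDiffTreeOptions)
	if err != nil {
		panic(err)
	}
	for _, ch := range changes {
		action, _ := ch.Action()
		fmt.Println(action, ch.From.Name, "->", ch.To.Name)
	}
}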

View File

@@ -115,7 +115,7 @@ func fileContent(f *File) (content string, isBinary bool, err error) {
return
}
// textPatch is an implementation of fdiff.Patch interface
// Patch is an implementation of fdiff.Patch interface
type Patch struct {
message string
filePatches []fdiff.FilePatch

View File

@@ -0,0 +1,813 @@
package object
import (
"errors"
"io"
"sort"
"strings"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/filemode"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/go-git/go-git/v5/utils/merkletrie"
)
// DetectRenames detects the renames in the given changes on two trees with
// the given options. It will return the given changes grouping additions and
// deletions into modifications when possible.
// If options is nil, the default diff tree options will be used.
func DetectRenames(
changes Changes,
opts *DiffTreeOptions,
) (Changes, error) {
if opts == nil {
opts = DefaultDiffTreeOptions
}
detector := &renameDetector{
renameScore: int(opts.RenameScore),
renameLimit: int(opts.RenameLimit),
onlyExact: opts.OnlyExactRenames,
}
for _, c := range changes {
action, err := c.Action()
if err != nil {
return nil, err
}
switch action {
case merkletrie.Insert:
detector.added = append(detector.added, c)
case merkletrie.Delete:
detector.deleted = append(detector.deleted, c)
default:
detector.modified = append(detector.modified, c)
}
}
return detector.detect()
}
// renameDetector will detect and resolve renames in a set of changes.
// see: https://github.com/eclipse/jgit/blob/master/org.eclipse.jgit/src/org/eclipse/jgit/diff/RenameDetector.java
type renameDetector struct {
added []*Change
deleted []*Change
modified []*Change
renameScore int
renameLimit int
onlyExact bool
}
// detectExactRenames matches files that were deleted with files that
// were added where the hash is the same on both. If there are multiple targets
// the one with the most similar path will be chosen as the rename and the
// rest as either deletions or additions.
func (d *renameDetector) detectExactRenames() {
added := groupChangesByHash(d.added)
deletes := groupChangesByHash(d.deleted)
var uniqueAdds []*Change
var nonUniqueAdds [][]*Change
var addedLeft []*Change
for _, cs := range added {
if len(cs) == 1 {
uniqueAdds = append(uniqueAdds, cs[0])
} else {
nonUniqueAdds = append(nonUniqueAdds, cs)
}
}
for _, c := range uniqueAdds {
hash := changeHash(c)
deleted := deletes[hash]
if len(deleted) == 1 {
if sameMode(c, deleted[0]) {
d.modified = append(d.modified, &Change{From: deleted[0].From, To: c.To})
delete(deletes, hash)
} else {
addedLeft = append(addedLeft, c)
}
} else if len(deleted) > 1 {
bestMatch := bestNameMatch(c, deleted)
if bestMatch != nil && sameMode(c, bestMatch) {
d.modified = append(d.modified, &Change{From: bestMatch.From, To: c.To})
delete(deletes, hash)
var newDeletes = make([]*Change, 0, len(deleted)-1)
for _, d := range deleted {
if d != bestMatch {
newDeletes = append(newDeletes, d)
}
}
deletes[hash] = newDeletes
}
} else {
addedLeft = append(addedLeft, c)
}
}
for _, added := range nonUniqueAdds {
hash := changeHash(added[0])
deleted := deletes[hash]
if len(deleted) == 1 {
deleted := deleted[0]
bestMatch := bestNameMatch(deleted, added)
if bestMatch != nil && sameMode(deleted, bestMatch) {
d.modified = append(d.modified, &Change{From: deleted.From, To: bestMatch.To})
delete(deletes, hash)
for _, c := range added {
if c != bestMatch {
addedLeft = append(addedLeft, c)
}
}
} else {
addedLeft = append(addedLeft, added...)
}
} else if len(deleted) > 1 {
maxSize := len(deleted) * len(added)
if d.renameLimit > 0 && d.renameLimit < maxSize {
maxSize = d.renameLimit
}
matrix := make(similarityMatrix, 0, maxSize)
for delIdx, del := range deleted {
deletedName := changeName(del)
for addIdx, add := range added {
addedName := changeName(add)
score := nameSimilarityScore(addedName, deletedName)
matrix = append(matrix, similarityPair{added: addIdx, deleted: delIdx, score: score})
if len(matrix) >= maxSize {
break
}
}
if len(matrix) >= maxSize {
break
}
}
sort.Stable(matrix)
usedAdds := make(map[*Change]struct{})
usedDeletes := make(map[*Change]struct{})
for i := len(matrix) - 1; i >= 0; i-- {
del := deleted[matrix[i].deleted]
add := added[matrix[i].added]
if add == nil || del == nil {
// it was already matched
continue
}
usedAdds[add] = struct{}{}
usedDeletes[del] = struct{}{}
d.modified = append(d.modified, &Change{From: del.From, To: add.To})
added[matrix[i].added] = nil
deleted[matrix[i].deleted] = nil
}
for _, c := range added {
if _, ok := usedAdds[c]; !ok && c != nil {
addedLeft = append(addedLeft, c)
}
}
var newDeletes = make([]*Change, 0, len(deleted)-len(usedDeletes))
for _, c := range deleted {
if _, ok := usedDeletes[c]; !ok && c != nil {
newDeletes = append(newDeletes, c)
}
}
deletes[hash] = newDeletes
} else {
addedLeft = append(addedLeft, added...)
}
}
d.added = addedLeft
d.deleted = nil
for _, dels := range deletes {
d.deleted = append(d.deleted, dels...)
}
}
// detectContentRenames detects renames based on the similarity of the content
// in the files by building a matrix of pairs between sources and destinations
// and matching by the highest score.
// see: https://github.com/eclipse/jgit/blob/master/org.eclipse.jgit/src/org/eclipse/jgit/diff/SimilarityRenameDetector.java
func (d *renameDetector) detectContentRenames() error {
cnt := max(len(d.added), len(d.deleted))
if d.renameLimit > 0 && cnt > d.renameLimit {
return nil
}
srcs, dsts := d.deleted, d.added
matrix, err := buildSimilarityMatrix(srcs, dsts, d.renameScore)
if err != nil {
return err
}
renames := make([]*Change, 0, min(len(matrix), len(dsts)))
// Match rename pairs on a first come, first serve basis until
// we have looked at everything that is above the minimum score.
for i := len(matrix) - 1; i >= 0; i-- {
pair := matrix[i]
src := srcs[pair.deleted]
dst := dsts[pair.added]
if dst == nil || src == nil {
// It was already matched before
continue
}
renames = append(renames, &Change{From: src.From, To: dst.To})
// Claim destination and source as matched
dsts[pair.added] = nil
srcs[pair.deleted] = nil
}
d.modified = append(d.modified, renames...)
d.added = compactChanges(dsts)
d.deleted = compactChanges(srcs)
return nil
}
func (d *renameDetector) detect() (Changes, error) {
if len(d.added) > 0 && len(d.deleted) > 0 {
d.detectExactRenames()
if !d.onlyExact {
if err := d.detectContentRenames(); err != nil {
return nil, err
}
}
}
result := make(Changes, 0, len(d.added)+len(d.deleted)+len(d.modified))
result = append(result, d.added...)
result = append(result, d.deleted...)
result = append(result, d.modified...)
sort.Stable(result)
return result, nil
}
func bestNameMatch(change *Change, changes []*Change) *Change {
var best *Change
var bestScore int
cname := changeName(change)
for _, c := range changes {
score := nameSimilarityScore(cname, changeName(c))
if score > bestScore {
bestScore = score
best = c
}
}
return best
}
func nameSimilarityScore(a, b string) int {
aDirLen := strings.LastIndexByte(a, '/') + 1
bDirLen := strings.LastIndexByte(b, '/') + 1
dirMin := min(aDirLen, bDirLen)
dirMax := max(aDirLen, bDirLen)
var dirScoreLtr, dirScoreRtl int
if dirMax == 0 {
dirScoreLtr = 100
dirScoreRtl = 100
} else {
var dirSim int
for ; dirSim < dirMin; dirSim++ {
if a[dirSim] != b[dirSim] {
break
}
}
dirScoreLtr = dirSim * 100 / dirMax
if dirScoreLtr == 100 {
dirScoreRtl = 100
} else {
for dirSim = 0; dirSim < dirMin; dirSim++ {
if a[aDirLen-1-dirSim] != b[bDirLen-1-dirSim] {
break
}
}
dirScoreRtl = dirSim * 100 / dirMax
}
}
fileMin := min(len(a)-aDirLen, len(b)-bDirLen)
fileMax := max(len(a)-aDirLen, len(b)-bDirLen)
fileSim := 0
for ; fileSim < fileMin; fileSim++ {
if a[len(a)-1-fileSim] != b[len(b)-1-fileSim] {
break
}
}
fileScore := fileSim * 100 / fileMax
return (((dirScoreLtr + dirScoreRtl) * 25) + (fileScore * 50)) / 100
}
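A hand-worked example of this scoring, using two hypothetical paths a = "src/old/util.go" and b = "src/new/util.go": the directory parts "src/old/" and "src/new/" are 8 bytes each and share "src/" from the left (dirScoreLtr = 4*100/8 = 50) but only the trailing "/" from the right (dirScoreRtl = 1*100/8 = 12); the file names "util.go" match completely from the right (fileScore = 7*100/7 = 100); the final score is ((50+12)*25 + 100*50)/100 = 65 out of 100.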
func changeName(c *Change) string {
if c.To != empty {
return c.To.Name
}
return c.From.Name
}
func changeHash(c *Change) plumbing.Hash {
if c.To != empty {
return c.To.TreeEntry.Hash
}
return c.From.TreeEntry.Hash
}
func changeMode(c *Change) filemode.FileMode {
if c.To != empty {
return c.To.TreeEntry.Mode
}
return c.From.TreeEntry.Mode
}
func sameMode(a, b *Change) bool {
return changeMode(a) == changeMode(b)
}
func groupChangesByHash(changes []*Change) map[plumbing.Hash][]*Change {
var result = make(map[plumbing.Hash][]*Change)
for _, c := range changes {
hash := changeHash(c)
result[hash] = append(result[hash], c)
}
return result
}
type similarityMatrix []similarityPair
func (m similarityMatrix) Len() int { return len(m) }
func (m similarityMatrix) Swap(i, j int) { m[i], m[j] = m[j], m[i] }
func (m similarityMatrix) Less(i, j int) bool {
if m[i].score == m[j].score {
if m[i].added == m[j].added {
return m[i].deleted < m[j].deleted
}
return m[i].added < m[j].added
}
return m[i].score < m[j].score
}
type similarityPair struct {
// index of the added file
added int
// index of the deleted file
deleted int
// similarity score
score int
}
func max(a, b int) int {
if a > b {
return a
}
return b
}
func min(a, b int) int {
if a < b {
return a
}
return b
}
func buildSimilarityMatrix(srcs, dsts []*Change, renameScore int) (similarityMatrix, error) {
// Allocate for the worst-case scenario where every pair has a score
// that we need to consider. We might not need that many.
matrix := make(similarityMatrix, 0, len(srcs)*len(dsts))
srcSizes := make([]int64, len(srcs))
dstSizes := make([]int64, len(dsts))
dstTooLarge := make(map[int]bool)
// Consider each pair of files; if the score is above the minimum
// threshold, record that score in the matrix so we can later find
// the best matches.
outerLoop:
for srcIdx, src := range srcs {
if changeMode(src) != filemode.Regular {
continue
}
// Declare the from file and the similarity index here so they can be
// reused inside the inner loop. They are deliberately not initialized
// yet, so the work can be skipped if they turn out not to be needed.
// They are initialized inside the inner loop if and only if they are
// needed, and are then reused on subsequent passes.
var from *File
var s *similarityIndex
var err error
for dstIdx, dst := range dsts {
if changeMode(dst) != filemode.Regular {
continue
}
if dstTooLarge[dstIdx] {
continue
}
var to *File
srcSize := srcSizes[srcIdx]
if srcSize == 0 {
from, _, err = src.Files()
if err != nil {
return nil, err
}
srcSize = from.Size + 1
srcSizes[srcIdx] = srcSize
}
dstSize := dstSizes[dstIdx]
if dstSize == 0 {
_, to, err = dst.Files()
if err != nil {
return nil, err
}
dstSize = to.Size + 1
dstSizes[dstIdx] = dstSize
}
min, max := srcSize, dstSize
if dstSize < srcSize {
min = dstSize
max = srcSize
}
if int(min*100/max) < renameScore {
// File sizes are too different to be a match
continue
}
if s == nil {
s, err = fileSimilarityIndex(from)
if err != nil {
if err == errIndexFull {
continue outerLoop
}
return nil, err
}
}
if to == nil {
_, to, err = dst.Files()
if err != nil {
return nil, err
}
}
di, err := fileSimilarityIndex(to)
if err != nil {
if err == errIndexFull {
dstTooLarge[dstIdx] = true
}
return nil, err
}
contentScore := s.score(di, 10000)
// nameSimilarityScore returns a value between 0 and 100, so scale it
// up to the same 0-10000 range as the content score.
nameScore := nameSimilarityScore(src.From.Name, dst.To.Name) * 100
score := (contentScore*99 + nameScore*1) / 10000
if score < renameScore {
continue
}
matrix = append(matrix, similarityPair{added: dstIdx, deleted: srcIdx, score: score})
}
}
sort.Stable(matrix)
return matrix, nil
}
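// Illustrative note (not part of this file): contentScore above lies in
// [0, 10000] and nameScore is scaled into the same range, so the combined
// (contentScore*99 + nameScore*1) / 10000 lies roughly in [0, 100] and weights
// content similarity 99:1 over name similarity. For example (hypothetical
// numbers), a 60% content match (6000) combined with a name score of 68
// (6800 after scaling) yields (6000*99 + 6800*1)/10000 = 60, which is then
// compared against renameScore.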
func compactChanges(changes []*Change) []*Change {
var result []*Change
for _, c := range changes {
if c != nil {
result = append(result, c)
}
}
return result
}
const (
keyShift = 32
maxCountValue = (1 << keyShift) - 1
)
var errIndexFull = errors.New("index is full")
// similarityIndex is an index structure of lines/blocks in one file.
// This structure can be used to compute an approximation of the similarity
// between two files.
// To save space in memory, this index uses a space-efficient encoding which
// will not exceed 1MiB per instance. The index starts out at a smaller size
// (closer to 2KiB), but may grow as more distinct blocks within the scanned
// file are discovered.
// see: https://github.com/eclipse/jgit/blob/master/org.eclipse.jgit/src/org/eclipse/jgit/diff/SimilarityIndex.java
type similarityIndex struct {
hashed uint64
// number of non-zero entries in hashes
numHashes int
growAt int
hashes []keyCountPair
hashBits int
}
func fileSimilarityIndex(f *File) (*similarityIndex, error) {
idx := newSimilarityIndex()
if err := idx.hash(f); err != nil {
return nil, err
}
sort.Stable(keyCountPairs(idx.hashes))
return idx, nil
}
func newSimilarityIndex() *similarityIndex {
return &similarityIndex{
hashBits: 8,
hashes: make([]keyCountPair, 1<<8),
growAt: shouldGrowAt(8),
}
}
func (i *similarityIndex) hash(f *File) error {
isBin, err := f.IsBinary()
if err != nil {
return err
}
r, err := f.Reader()
if err != nil {
return err
}
defer ioutil.CheckClose(r, &err)
return i.hashContent(r, f.Size, isBin)
}
func (i *similarityIndex) hashContent(r io.Reader, size int64, isBin bool) error {
var buf = make([]byte, 4096)
var ptr, cnt int
remaining := size
for 0 < remaining {
hash := 5381
var blockHashedCnt uint64
// Hash one line or block, whatever happens first
n := int64(0)
for {
if ptr == cnt {
ptr = 0
var err error
cnt, err = io.ReadFull(r, buf)
if err != nil && err != io.ErrUnexpectedEOF {
return err
}
if cnt == 0 {
return io.EOF
}
}
n++
c := buf[ptr] & 0xff
ptr++
// Ignore CR in CRLF sequence if it's text
if !isBin && c == '\r' && ptr < cnt && buf[ptr] == '\n' {
continue
}
blockHashedCnt++
if c == '\n' {
break
}
hash = (hash << 5) + hash + int(c)
if n >= 64 || n >= remaining {
break
}
}
i.hashed += blockHashedCnt
if err := i.add(hash, blockHashedCnt); err != nil {
return err
}
remaining -= n
}
return nil
}
// score computes the similarity score between this index and another one.
// A region of a file is defined as a line in a text file or a fixed-size
// block in a binary file. To prepare an index, each region in the file is
// hashed; the values and counts of hashes are retained in a sorted table.
// Define the similarity fraction F as the count of matching regions between
// the two files divided by the maximum count of regions in either file.
// The similarity score is F multiplied by the maxScore constant, yielding a
// range [0, maxScore]. It is defined as maxScore for the degenerate case of
// two empty files.
// The similarity score is symmetrical; i.e. a.score(b) == b.score(a).
func (i *similarityIndex) score(other *similarityIndex, maxScore int) int {
var maxHashed = i.hashed
if maxHashed < other.hashed {
maxHashed = other.hashed
}
if maxHashed == 0 {
return maxScore
}
return int(i.common(other) * uint64(maxScore) / maxHashed)
}
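// Illustrative usage (a sketch, not part of this file): to approximate how
// similar two files are on a 0-100 scale, build an index for each and score
// one against the other; the result is symmetrical:
//
//	a, err := fileSimilarityIndex(fileA)
//	// handle err (errIndexFull means the file has too many distinct blocks)
//	b, err := fileSimilarityIndex(fileB)
//	// handle err
//	similarity := a.score(b, 100) // == b.score(a, 100)
//
// fileA and fileB are assumed *File values obtained elsewhere.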
func (i *similarityIndex) common(dst *similarityIndex) uint64 {
srcIdx, dstIdx := 0, 0
if i.numHashes == 0 || dst.numHashes == 0 {
return 0
}
var common uint64
srcKey, dstKey := i.hashes[srcIdx].key(), dst.hashes[dstIdx].key()
for {
if srcKey == dstKey {
srcCnt, dstCnt := i.hashes[srcIdx].count(), dst.hashes[dstIdx].count()
if srcCnt < dstCnt {
common += srcCnt
} else {
common += dstCnt
}
srcIdx++
if srcIdx == len(i.hashes) {
break
}
srcKey = i.hashes[srcIdx].key()
dstIdx++
if dstIdx == len(dst.hashes) {
break
}
dstKey = dst.hashes[dstIdx].key()
} else if srcKey < dstKey {
// Region of src that is not in dst
srcIdx++
if srcIdx == len(i.hashes) {
break
}
srcKey = i.hashes[srcIdx].key()
} else {
// Region of dst that is not in src
dstIdx++
if dstIdx == len(dst.hashes) {
break
}
dstKey = dst.hashes[dstIdx].key()
}
}
return common
}
func (i *similarityIndex) add(key int, cnt uint64) error {
key = int(uint32(key)*0x9e370001 >> 1)
j := i.slot(key)
for {
v := i.hashes[j]
if v == 0 {
// It's an empty slot, so we can store it here.
if i.growAt <= i.numHashes {
if err := i.grow(); err != nil {
return err
}
j = i.slot(key)
continue
}
var err error
i.hashes[j], err = newKeyCountPair(key, cnt)
if err != nil {
return err
}
i.numHashes++
return nil
} else if v.key() == key {
// It's the same key, so increment the counter.
var err error
i.hashes[j], err = newKeyCountPair(key, v.count()+cnt)
if err != nil {
return err
}
return nil
} else if j+1 >= len(i.hashes) {
j = 0
} else {
j++
}
}
}
type keyCountPair uint64
func newKeyCountPair(key int, cnt uint64) (keyCountPair, error) {
if cnt > maxCountValue {
return 0, errIndexFull
}
return keyCountPair((uint64(key) << keyShift) | cnt), nil
}
func (p keyCountPair) key() int {
return int(p >> keyShift)
}
func (p keyCountPair) count() uint64 {
return uint64(p) & maxCountValue
}
func (i *similarityIndex) slot(key int) int {
// We use 31 - hashBits because the upper bit was already forced
// to be 0 and we want the remaining high bits to be used as the
// table slot.
return int(uint32(key) >> uint(31 - i.hashBits))
}
func shouldGrowAt(hashBits int) int {
return (1 << uint(hashBits)) * (hashBits - 3) / hashBits
}
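// Illustrative note (not part of this file): for the initial hashBits of 8
// this is (1<<8)*(8-3)/8 = 160, i.e. the 256-slot table is grown once it is
// 62.5% full; the load-factor threshold rises slowly as hashBits grows.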
func (i *similarityIndex) grow() error {
if i.hashBits == 30 {
return errIndexFull
}
old := i.hashes
i.hashBits++
i.growAt = shouldGrowAt(i.hashBits)
// TODO(erizocosmico): find a way to check if it will OOM and return
// errIndexFull instead.
i.hashes = make([]keyCountPair, 1<<uint(i.hashBits))
for _, v := range old {
if v != 0 {
j := i.slot(v.key())
for i.hashes[j] != 0 {
j++
if j >= len(i.hashes) {
j = 0
}
}
i.hashes[j] = v
}
}
return nil
}
type keyCountPairs []keyCountPair
func (p keyCountPairs) Len() int { return len(p) }
func (p keyCountPairs) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
func (p keyCountPairs) Less(i, j int) bool { return p[i] < p[j] }

View File

@@ -304,29 +304,34 @@ func (t *Tree) buildMap() {
}
// Diff returns a list of changes between this tree and the provided one
func (from *Tree) Diff(to *Tree) (Changes, error) {
return DiffTree(from, to)
func (t *Tree) Diff(to *Tree) (Changes, error) {
return t.DiffContext(context.Background(), to)
}
// Diff returns a list of changes between this tree and the provided one
// Error will be returned if context expires
// Provided context must be non nil
func (from *Tree) DiffContext(ctx context.Context, to *Tree) (Changes, error) {
return DiffTreeContext(ctx, from, to)
// DiffContext returns a list of changes between this tree and the provided one
// Error will be returned if context expires. Provided context must be non nil.
//
// NOTE: Since version 5.1.0 renames are correctly handled; the settings
// used are the recommended options DefaultDiffTreeOptions.
func (t *Tree) DiffContext(ctx context.Context, to *Tree) (Changes, error) {
return DiffTreeWithOptions(ctx, t, to, DefaultDiffTreeOptions)
}
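// Illustrative usage (a sketch, not part of this diff): with two *Tree values
// taken from consecutive commits and a non-nil ctx, the context-aware call now
// performs rename detection using DefaultDiffTreeOptions:
//
//	changes, err := oldTree.DiffContext(ctx, newTree)
//
// oldTree, newTree and ctx are assumed to be available to the caller.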
// Patch returns a slice of Patch objects with all the changes between trees
// in chunks. This representation can be used to create several diff outputs.
func (from *Tree) Patch(to *Tree) (*Patch, error) {
return from.PatchContext(context.Background(), to)
func (t *Tree) Patch(to *Tree) (*Patch, error) {
return t.PatchContext(context.Background(), to)
}
// Patch returns a slice of Patch objects with all the changes between trees
// in chunks. This representation can be used to create several diff outputs.
// If context expires, an error will be returned
// Provided context must be non-nil
func (from *Tree) PatchContext(ctx context.Context, to *Tree) (*Patch, error) {
changes, err := DiffTreeContext(ctx, from, to)
// PatchContext returns a slice of Patch objects with all the changes between
// trees in chunks. This representation can be used to create several diff
// outputs. If context expires, an error will be returned. Provided context must
// be non-nil.
//
// NOTE: Since version 5.1.0 renames are correctly handled; the settings
// used are the recommended options DefaultDiffTreeOptions.
func (t *Tree) PatchContext(ctx context.Context, to *Tree) (*Patch, error) {
changes, err := t.DiffContext(ctx, to)
if err != nil {
return nil, err
}

View File

@@ -201,3 +201,11 @@ func (a *AdvRefs) addSymbolicRefs(s storer.ReferenceStorer) error {
func (a *AdvRefs) supportSymrefs() bool {
return a.Capabilities.Supports(capability.SymRef)
}
// IsEmpty returns true if it doesn't contain any references.
func (a *AdvRefs) IsEmpty() bool {
return a.Head == nil &&
len(a.References) == 0 &&
len(a.Peeled) == 0 &&
len(a.Shallows) == 0
}

View File

@@ -175,6 +175,13 @@ func (s *session) AdvertisedReferences() (*packp.AdvRefs, error) {
}
}
// Some servers, like jGit, announce capabilities instead of returning a
// packp message with a flush. This verifies that we received an empty
// adv-refs, even if it contains capabilities.
if !s.isReceivePack && ar.IsEmpty() {
return nil, transport.ErrEmptyRemoteRepository
}
transport.FilterUnsupportedCapabilities(ar.Capabilities)
s.advRefs = ar
return ar, nil

View File

@@ -243,11 +243,13 @@ func (s *rpSession) ReceivePack(ctx context.Context, req *packp.ReferenceUpdateR
//TODO: Implement 'atomic' update of references.
r := ioutil.NewContextReadCloser(ctx, req.Packfile)
if err := s.writePackfile(r); err != nil {
s.unpackErr = err
s.firstErr = err
return s.reportStatus(), err
if req.Packfile != nil {
r := ioutil.NewContextReadCloser(ctx, req.Packfile)
if err := s.writePackfile(r); err != nil {
s.unpackErr = err
s.firstErr = err
return s.reportStatus(), err
}
}
s.updateReferences(req)

View File

@@ -29,6 +29,7 @@ var (
NoErrAlreadyUpToDate = errors.New("already up-to-date")
ErrDeleteRefNotSupported = errors.New("server does not support delete-refs")
ErrForceNeeded = errors.New("some refs were not updated")
ErrExactSHA1NotSupported = errors.New("server does not support exact SHA1 refspec")
)
const (
@@ -122,6 +123,15 @@ func (r *Remote) PushContext(ctx context.Context, o *PushOptions) (err error) {
return ErrDeleteRefNotSupported
}
if o.Force {
for i := 0; i < len(o.RefSpecs); i++ {
rs := &o.RefSpecs[i]
if !rs.IsForceUpdate() {
o.RefSpecs[i] = config.RefSpec("+" + rs.String())
}
}
}
localRefs, err := r.references()
if err != nil {
return err
@@ -303,6 +313,10 @@ func (r *Remote) fetch(ctx context.Context, o *FetchOptions) (sto storer.Referen
return nil, err
}
if err := r.isSupportedRefSpec(o.RefSpecs, ar); err != nil {
return nil, err
}
remoteRefs, err := ar.AllReferences()
if err != nil {
return nil, err
@@ -546,6 +560,7 @@ func (r *Remote) addReferenceIfRefSpecMatches(rs config.RefSpec,
func (r *Remote) references() ([]*plumbing.Reference, error) {
var localRefs []*plumbing.Reference
iter, err := r.s.IterReferences()
if err != nil {
return nil, err
@@ -701,6 +716,11 @@ func doCalculateRefs(
return err
}
if s.IsExactSHA1() {
ref := plumbing.NewHashReference(s.Dst(""), plumbing.NewHash(s.Src()))
return refs.SetReference(ref)
}
var matched bool
err = iter.ForEach(func(ref *plumbing.Reference) error {
if !s.Match(ref.Name()) {
@@ -850,6 +870,26 @@ func (r *Remote) newUploadPackRequest(o *FetchOptions,
return req, nil
}
func (r *Remote) isSupportedRefSpec(refs []config.RefSpec, ar *packp.AdvRefs) error {
var containsIsExact bool
for _, ref := range refs {
if ref.IsExactSHA1() {
containsIsExact = true
}
}
if !containsIsExact {
return nil
}
if ar.Capabilities.Supports(capability.AllowReachableSHA1InWant) ||
ar.Capabilities.Supports(capability.AllowTipSHA1InWant) {
return nil
}
return ErrExactSHA1NotSupported
}
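// Illustrative note (a sketch, not part of this diff): a fetch refspec whose
// source is a full 40-character SHA1, e.g.
//
//	config.RefSpec("<full-sha1>:refs/heads/pinned")
//
// is now honoured when the server advertises allow-reachable-sha1-in-want or
// allow-tip-sha1-in-want; otherwise the fetch fails with
// ErrExactSHA1NotSupported. "<full-sha1>" and "refs/heads/pinned" are
// placeholders, not values taken from this diff.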
func buildSidebandIfSupported(l *capability.List, reader io.Reader, p sideband.Progress) io.Reader {
var t sideband.Type
@@ -883,7 +923,7 @@ func (r *Remote) updateLocalReferenceStorage(
}
for _, ref := range fetchedRefs {
if !spec.Match(ref.Name()) {
if !spec.Match(ref.Name()) && !spec.IsExactSHA1() {
continue
}

View File

@@ -13,7 +13,6 @@ import (
"strings"
"time"
"golang.org/x/crypto/openpgp"
"github.com/go-git/go-git/v5/config"
"github.com/go-git/go-git/v5/internal/revision"
"github.com/go-git/go-git/v5/plumbing"
@@ -24,6 +23,8 @@ import (
"github.com/go-git/go-git/v5/storage"
"github.com/go-git/go-git/v5/storage/filesystem"
"github.com/go-git/go-git/v5/utils/ioutil"
"github.com/imdario/mergo"
"golang.org/x/crypto/openpgp"
"github.com/go-git/go-billy/v5"
"github.com/go-git/go-billy/v5/osfs"
@@ -155,7 +156,7 @@ func setConfigWorktree(r *Repository, worktree, storage billy.Filesystem) error
return nil
}
cfg, err := r.Storer.Config()
cfg, err := r.Config()
if err != nil {
return err
}
@@ -434,14 +435,56 @@ func cleanUpDir(path string, all bool) error {
return err
}
// Config return the repository config
// Config returns the repository config. In a filesystem-backed repository
// this means reading `.git/config`.
func (r *Repository) Config() (*config.Config, error) {
return r.Storer.Config()
}
// SetConfig marshals and writes the repository config. In a filesystem-backed
// repository this means writing `.git/config`. This function should be called
// with the result of `Repository.Config` and never with the output of
// `Repository.ConfigScoped`.
func (r *Repository) SetConfig(cfg *config.Config) error {
return r.Storer.SetConfig(cfg)
}
// ConfigScoped returns the repository config, merged with the requested scope
// and lower. For example, if config.GlobalScope is given, the local and global
// config are returned merged into one config value.
func (r *Repository) ConfigScoped(scope config.Scope) (*config.Config, error) {
// TODO(mcuadros): v6, add this as ConfigOptions.Scoped
var err error
system := config.NewConfig()
if scope >= config.SystemScope {
system, err = config.LoadConfig(config.SystemScope)
if err != nil {
return nil, err
}
}
global := config.NewConfig()
if scope >= config.GlobalScope {
global, err = config.LoadConfig(config.GlobalScope)
if err != nil {
return nil, err
}
}
local, err := r.Storer.Config()
if err != nil {
return nil, err
}
_ = mergo.Merge(global, system)
_ = mergo.Merge(local, global)
return local, nil
}
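// Illustrative usage (a sketch, not part of this diff): read the merged view
// of system, global and local configuration:
//
//	cfg, err := repo.ConfigScoped(config.SystemScope)
//
// repo is an assumed *Repository value; locally set values take precedence
// over global ones, which in turn take precedence over system ones.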
// Remote returns a remote if it exists
func (r *Repository) Remote(name string) (*Remote, error) {
cfg, err := r.Storer.Config()
cfg, err := r.Config()
if err != nil {
return nil, err
}
@@ -456,7 +499,7 @@ func (r *Repository) Remote(name string) (*Remote, error) {
// Remotes returns a list with all the remotes
func (r *Repository) Remotes() ([]*Remote, error) {
cfg, err := r.Storer.Config()
cfg, err := r.Config()
if err != nil {
return nil, err
}
@@ -480,7 +523,7 @@ func (r *Repository) CreateRemote(c *config.RemoteConfig) (*Remote, error) {
remote := NewRemote(r.Storer, c)
cfg, err := r.Storer.Config()
cfg, err := r.Config()
if err != nil {
return nil, err
}
@@ -511,7 +554,7 @@ func (r *Repository) CreateRemoteAnonymous(c *config.RemoteConfig) (*Remote, err
// DeleteRemote delete a remote from the repository and delete the config
func (r *Repository) DeleteRemote(name string) error {
cfg, err := r.Storer.Config()
cfg, err := r.Config()
if err != nil {
return err
}
@@ -526,7 +569,7 @@ func (r *Repository) DeleteRemote(name string) error {
// Branch return a Branch if exists
func (r *Repository) Branch(name string) (*config.Branch, error) {
cfg, err := r.Storer.Config()
cfg, err := r.Config()
if err != nil {
return nil, err
}
@@ -545,7 +588,7 @@ func (r *Repository) CreateBranch(c *config.Branch) error {
return err
}
cfg, err := r.Storer.Config()
cfg, err := r.Config()
if err != nil {
return err
}
@@ -560,7 +603,7 @@ func (r *Repository) CreateBranch(c *config.Branch) error {
// DeleteBranch delete a Branch from the repository and delete the config
func (r *Repository) DeleteBranch(name string) error {
cfg, err := r.Storer.Config()
cfg, err := r.Config()
if err != nil {
return err
}
@@ -835,7 +878,7 @@ func (r *Repository) cloneRefSpec(o *CloneOptions) []config.RefSpec {
}
func (r *Repository) setIsBare(isBare bool) error {
cfg, err := r.Storer.Config()
cfg, err := r.Config()
if err != nil {
return err
}
@@ -851,7 +894,7 @@ func (r *Repository) updateRemoteConfigIfNeeded(o *CloneOptions, c *config.Remot
c.Fetch = r.cloneRefSpec(o)
cfg, err := r.Storer.Config()
cfg, err := r.Config()
if err != nil {
return err
}
@@ -1541,7 +1584,7 @@ func (r *Repository) createNewObjectPack(cfg *RepackConfig) (h plumbing.Hash, er
return h, err
}
defer ioutil.CheckClose(wc, &err)
scfg, err := r.Storer.Config()
scfg, err := r.Config()
if err != nil {
return h, err
}

View File

@@ -1,7 +1,6 @@
package filesystem
import (
stdioutil "io/ioutil"
"os"
"github.com/go-git/go-git/v5/config"
@@ -14,29 +13,17 @@ type ConfigStorage struct {
}
func (c *ConfigStorage) Config() (conf *config.Config, err error) {
cfg := config.NewConfig()
f, err := c.dir.Config()
if err != nil {
if os.IsNotExist(err) {
return cfg, nil
return config.NewConfig(), nil
}
return nil, err
}
defer ioutil.CheckClose(f, &err)
b, err := stdioutil.ReadAll(f)
if err != nil {
return nil, err
}
if err = cfg.Unmarshal(b); err != nil {
return nil, err
}
return cfg, err
return config.ReadConfig(f)
}
func (c *ConfigStorage) SetConfig(cfg *config.Config) (err error) {

View File

@@ -57,6 +57,9 @@ var (
// targeting a non-existing object. This usually means the repository
// is corrupt.
ErrSymRefTargetNotFound = errors.New("symbolic reference target not found")
// ErrIsDir is returned when a reference file is being read but the
// path specified is a directory.
ErrIsDir = errors.New("reference path is a directory")
)
// Options holds configuration for the storage.
@@ -926,6 +929,14 @@ func (d *DotGit) addRefFromHEAD(refs *[]*plumbing.Reference) error {
func (d *DotGit) readReferenceFile(path, name string) (ref *plumbing.Reference, err error) {
path = d.fs.Join(path, d.fs.Join(strings.Split(name, "/")...))
st, err := d.fs.Stat(path)
if err != nil {
return nil, err
}
if st.IsDir() {
return nil, ErrIsDir
}
f, err := d.fs.Open(path)
if err != nil {
return nil, err

View File

@@ -408,6 +408,8 @@ func (s *ObjectStorage) getFromUnpacked(h plumbing.Hash) (obj plumbing.EncodedOb
return nil, err
}
defer ioutil.CheckClose(w, &err)
s.objectCache.Put(obj)
_, err = io.Copy(w, r)

View File

@@ -35,7 +35,7 @@ func (s *Submodule) Config() *config.Submodule {
// Init initialize the submodule reading the recorded Entry in the index for
// the given submodule
func (s *Submodule) Init() error {
cfg, err := s.w.r.Storer.Config()
cfg, err := s.w.r.Config()
if err != nil {
return err
}

View File

@@ -23,7 +23,7 @@ package merkletrie
// # Cases
//
// When comparing noders in both trees you will found yourself in
// When comparing noders in both trees you will find yourself in
// one of 169 possible cases, but if we ignore moves, we can greatly
// simplify the search space into the following table:
//
@@ -256,17 +256,21 @@ import (
)
var (
// ErrCanceled is returned whenever the operation is canceled.
ErrCanceled = errors.New("operation canceled")
)
// DiffTree calculates the list of changes between two merkletries. It
// uses the provided hashEqual callback to compare noders.
func DiffTree(fromTree, toTree noder.Noder,
hashEqual noder.Equal) (Changes, error) {
func DiffTree(
fromTree,
toTree noder.Noder,
hashEqual noder.Equal,
) (Changes, error) {
return DiffTreeContext(context.Background(), fromTree, toTree, hashEqual)
}
// DiffTree calculates the list of changes between two merkletries. It
// DiffTreeContext calculates the list of changes between two merkletries. It
// uses the provided hashEqual callback to compare noders.
// Error will be returned if context expires
// Provided context must be non nil

9
vendor/github.com/google/uuid/.travis.yml generated vendored Normal file
View File

@@ -0,0 +1,9 @@
language: go
go:
- 1.4.3
- 1.5.3
- tip
script:
- go test -v ./...

10
vendor/github.com/google/uuid/CONTRIBUTING.md generated vendored Normal file
View File

@@ -0,0 +1,10 @@
# How to contribute
We definitely welcome patches and contribution to this project!
### Legal requirements
In order to protect both you and ourselves, you will need to sign the
[Contributor License Agreement](https://cla.developers.google.com/clas).
You may have already signed it for other Google projects.

9
vendor/github.com/google/uuid/CONTRIBUTORS generated vendored Normal file
View File

@@ -0,0 +1,9 @@
Paul Borman <borman@google.com>
bmatsuo
shawnps
theory
jboverfelt
dsymonds
cd1
wallclockbuilder
dansouza

27
vendor/github.com/google/uuid/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,27 @@
Copyright (c) 2009,2014 Google Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

19
vendor/github.com/google/uuid/README.md generated vendored Normal file
View File

@@ -0,0 +1,19 @@
# uuid ![build status](https://travis-ci.org/google/uuid.svg?branch=master)
The uuid package generates and inspects UUIDs based on
[RFC 4122](http://tools.ietf.org/html/rfc4122)
and DCE 1.1: Authentication and Security Services.
This package is based on the github.com/pborman/uuid package (previously named
code.google.com/p/go-uuid). It differs from these earlier packages in that
a UUID is a 16 byte array rather than a byte slice. One loss due to this
change is the ability to represent an invalid UUID (vs a NIL UUID).
###### Install
`go get github.com/google/uuid`
###### Documentation
[![GoDoc](https://godoc.org/github.com/google/uuid?status.svg)](http://godoc.org/github.com/google/uuid)
Full `go doc` style documentation for the package can be viewed online without
installing this package by using the GoDoc site here:
http://godoc.org/github.com/google/uuid

80
vendor/github.com/google/uuid/dce.go generated vendored Normal file
View File

@@ -0,0 +1,80 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"encoding/binary"
"fmt"
"os"
)
// A Domain represents a Version 2 domain
type Domain byte
// Domain constants for DCE Security (Version 2) UUIDs.
const (
Person = Domain(0)
Group = Domain(1)
Org = Domain(2)
)
// NewDCESecurity returns a DCE Security (Version 2) UUID.
//
// The domain should be one of Person, Group or Org.
// On a POSIX system the id should be the user's UID for the Person
// domain and the user's GID for the Group. The meaning of id for
// the domain Org or on non-POSIX systems is site-defined.
//
// For a given domain/id pair the same token may be returned for up to
// 7 minutes and 10 seconds.
func NewDCESecurity(domain Domain, id uint32) (UUID, error) {
uuid, err := NewUUID()
if err == nil {
uuid[6] = (uuid[6] & 0x0f) | 0x20 // Version 2
uuid[9] = byte(domain)
binary.BigEndian.PutUint32(uuid[0:], id)
}
return uuid, err
}
// NewDCEPerson returns a DCE Security (Version 2) UUID in the person
// domain with the id returned by os.Getuid.
//
// NewDCESecurity(Person, uint32(os.Getuid()))
func NewDCEPerson() (UUID, error) {
return NewDCESecurity(Person, uint32(os.Getuid()))
}
// NewDCEGroup returns a DCE Security (Version 2) UUID in the group
// domain with the id returned by os.Getgid.
//
// NewDCESecurity(Group, uint32(os.Getgid()))
func NewDCEGroup() (UUID, error) {
return NewDCESecurity(Group, uint32(os.Getgid()))
}
// Domain returns the domain for a Version 2 UUID. Domains are only defined
// for Version 2 UUIDs.
func (uuid UUID) Domain() Domain {
return Domain(uuid[9])
}
// ID returns the id for a Version 2 UUID. IDs are only defined for Version 2
// UUIDs.
func (uuid UUID) ID() uint32 {
return binary.BigEndian.Uint32(uuid[0:4])
}
func (d Domain) String() string {
switch d {
case Person:
return "Person"
case Group:
return "Group"
case Org:
return "Org"
}
return fmt.Sprintf("Domain%d", int(d))
}

12
vendor/github.com/google/uuid/doc.go generated vendored Normal file
View File

@@ -0,0 +1,12 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package uuid generates and inspects UUIDs.
//
// UUIDs are based on RFC 4122 and DCE 1.1: Authentication and Security
// Services.
//
// A UUID is a 16 byte (128 bit) array. UUIDs may be used as keys to
// maps or compared directly.
package uuid

1
vendor/github.com/google/uuid/go.mod generated vendored Normal file
View File

@@ -0,0 +1 @@
module github.com/google/uuid

53
vendor/github.com/google/uuid/hash.go generated vendored Normal file
View File

@@ -0,0 +1,53 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"crypto/md5"
"crypto/sha1"
"hash"
)
// Well known namespace IDs and UUIDs
var (
NameSpaceDNS = Must(Parse("6ba7b810-9dad-11d1-80b4-00c04fd430c8"))
NameSpaceURL = Must(Parse("6ba7b811-9dad-11d1-80b4-00c04fd430c8"))
NameSpaceOID = Must(Parse("6ba7b812-9dad-11d1-80b4-00c04fd430c8"))
NameSpaceX500 = Must(Parse("6ba7b814-9dad-11d1-80b4-00c04fd430c8"))
Nil UUID // empty UUID, all zeros
)
// NewHash returns a new UUID derived from the hash of space concatenated with
// data generated by h. The hash should be at least 16 bytes in length. The
// first 16 bytes of the hash are used to form the UUID. The version of the
// UUID will be the lower 4 bits of version. NewHash is used to implement
// NewMD5 and NewSHA1.
func NewHash(h hash.Hash, space UUID, data []byte, version int) UUID {
h.Reset()
h.Write(space[:])
h.Write(data)
s := h.Sum(nil)
var uuid UUID
copy(uuid[:], s)
uuid[6] = (uuid[6] & 0x0f) | uint8((version&0xf)<<4)
uuid[8] = (uuid[8] & 0x3f) | 0x80 // RFC 4122 variant
return uuid
}
// NewMD5 returns a new MD5 (Version 3) UUID based on the
// supplied name space and data. It is the same as calling:
//
// NewHash(md5.New(), space, data, 3)
func NewMD5(space UUID, data []byte) UUID {
return NewHash(md5.New(), space, data, 3)
}
// NewSHA1 returns a new SHA1 (Version 5) UUID based on the
// supplied name space and data. It is the same as calling:
//
// NewHash(sha1.New(), space, data, 5)
func NewSHA1(space UUID, data []byte) UUID {
return NewHash(sha1.New(), space, data, 5)
}
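// Illustrative usage (a sketch, not part of this file): name-based UUIDs are
// deterministic, so the same namespace and name always produce the same value:
//
//	u := NewSHA1(NameSpaceURL, []byte("https://example.com"))
//	// u is the same Version 5 UUID on every call
//
// "https://example.com" is only an example input.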

37
vendor/github.com/google/uuid/marshal.go generated vendored Normal file
View File

@@ -0,0 +1,37 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import "fmt"
// MarshalText implements encoding.TextMarshaler.
func (uuid UUID) MarshalText() ([]byte, error) {
var js [36]byte
encodeHex(js[:], uuid)
return js[:], nil
}
// UnmarshalText implements encoding.TextUnmarshaler.
func (uuid *UUID) UnmarshalText(data []byte) error {
id, err := ParseBytes(data)
if err == nil {
*uuid = id
}
return err
}
// MarshalBinary implements encoding.BinaryMarshaler.
func (uuid UUID) MarshalBinary() ([]byte, error) {
return uuid[:], nil
}
// UnmarshalBinary implements encoding.BinaryUnmarshaler.
func (uuid *UUID) UnmarshalBinary(data []byte) error {
if len(data) != 16 {
return fmt.Errorf("invalid UUID (got %d bytes)", len(data))
}
copy(uuid[:], data)
return nil
}
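// Illustrative note (a sketch, not part of this file): because UUID implements
// encoding.TextMarshaler and encoding.TextUnmarshaler, encoding/json renders a
// UUID as its canonical string and parses it back, e.g.
//
//	b, _ := json.Marshal(struct{ ID UUID }{ID: NameSpaceDNS})
//	// b == []byte(`{"ID":"6ba7b810-9dad-11d1-80b4-00c04fd430c8"}`)
//
// assuming "encoding/json" is imported by the caller.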

90
vendor/github.com/google/uuid/node.go generated vendored Normal file
View File

@@ -0,0 +1,90 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"sync"
)
var (
nodeMu sync.Mutex
ifname string // name of interface being used
nodeID [6]byte // hardware for version 1 UUIDs
zeroID [6]byte // nodeID with only 0's
)
// NodeInterface returns the name of the interface from which the NodeID was
// derived. The interface "user" is returned if the NodeID was set by
// SetNodeID.
func NodeInterface() string {
defer nodeMu.Unlock()
nodeMu.Lock()
return ifname
}
// SetNodeInterface selects the hardware address to be used for Version 1 UUIDs.
// If name is "" then the first usable interface found will be used or a random
// Node ID will be generated. If a named interface cannot be found then false
// is returned.
//
// SetNodeInterface never fails when name is "".
func SetNodeInterface(name string) bool {
defer nodeMu.Unlock()
nodeMu.Lock()
return setNodeInterface(name)
}
func setNodeInterface(name string) bool {
iname, addr := getHardwareInterface(name) // null implementation for js
if iname != "" && addr != nil {
ifname = iname
copy(nodeID[:], addr)
return true
}
// We found no interfaces with a valid hardware address. If name
// does not specify a specific interface, generate a random Node ID
// (section 4.1.6)
if name == "" {
ifname = "random"
randomBits(nodeID[:])
return true
}
return false
}
// NodeID returns a slice of a copy of the current Node ID, setting the Node ID
// if not already set.
func NodeID() []byte {
defer nodeMu.Unlock()
nodeMu.Lock()
if nodeID == zeroID {
setNodeInterface("")
}
nid := nodeID
return nid[:]
}
// SetNodeID sets the Node ID to be used for Version 1 UUIDs. The first 6 bytes
// of id are used. If id is less than 6 bytes then false is returned and the
// Node ID is not set.
func SetNodeID(id []byte) bool {
if len(id) < 6 {
return false
}
defer nodeMu.Unlock()
nodeMu.Lock()
copy(nodeID[:], id)
ifname = "user"
return true
}
// NodeID returns the 6 byte node id encoded in uuid. It returns nil if uuid is
// not valid. The NodeID is only well defined for version 1 and 2 UUIDs.
func (uuid UUID) NodeID() []byte {
var node [6]byte
copy(node[:], uuid[10:])
return node[:]
}

12
vendor/github.com/google/uuid/node_js.go generated vendored Normal file
View File

@@ -0,0 +1,12 @@
// Copyright 2017 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build js
package uuid
// getHardwareInterface returns nil values for the JS version of the code.
// This removes the "net" dependency, because it is not used in the browser.
// Using the "net" library inflates the size of the transpiled JS code by 673k bytes.
func getHardwareInterface(name string) (string, []byte) { return "", nil }

33
vendor/github.com/google/uuid/node_net.go generated vendored Normal file
View File

@@ -0,0 +1,33 @@
// Copyright 2017 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !js
package uuid
import "net"
var interfaces []net.Interface // cached list of interfaces
// getHardwareInterface returns the name and hardware address of interface name.
// If name is "" then the name and hardware address of one of the system's
// interfaces is returned. If no interfaces are found (name does not exist or
// there are no interfaces) then "", nil is returned.
//
// Only addresses of at least 6 bytes are returned.
func getHardwareInterface(name string) (string, []byte) {
if interfaces == nil {
var err error
interfaces, err = net.Interfaces()
if err != nil {
return "", nil
}
}
for _, ifs := range interfaces {
if len(ifs.HardwareAddr) >= 6 && (name == "" || name == ifs.Name) {
return ifs.Name, ifs.HardwareAddr
}
}
return "", nil
}

59
vendor/github.com/google/uuid/sql.go generated vendored Normal file
View File

@@ -0,0 +1,59 @@
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package uuid
import (
"database/sql/driver"
"fmt"
)
// Scan implements sql.Scanner so UUIDs can be read from databases transparently
// Currently, database types that map to string and []byte are supported. Please
// consult database-specific driver documentation for matching types.
func (uuid *UUID) Scan(src interface{}) error {
switch src := src.(type) {
case nil:
return nil
case string:
// if an empty UUID comes from a table, we return a null UUID
if src == "" {
return nil
}
// see Parse for required string format
u, err := Parse(src)
if err != nil {
return fmt.Errorf("Scan: %v", err)
}
*uuid = u
case []byte:
// if an empty UUID comes from a table, we return a null UUID
if len(src) == 0 {
return nil
}
// assumes a simple slice of bytes if it is exactly 16 bytes;
// otherwise attempts to parse it
if len(src) != 16 {
return uuid.Scan(string(src))
}
copy((*uuid)[:], src)
default:
return fmt.Errorf("Scan: unable to scan type %T into UUID", src)
}
return nil
}
// Value implements driver.Valuer so that UUIDs can be written to databases
// transparently. Currently, UUIDs map to strings. Please consult
// database-specific driver documentation for matching types.
func (uuid UUID) Value() (driver.Value, error) {
return uuid.String(), nil
}
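// Illustrative usage (a sketch, not part of this file): with database/sql a
// UUID column can be read directly, since *UUID implements sql.Scanner and
// UUID implements driver.Valuer:
//
//	var id UUID
//	err := db.QueryRow(`SELECT id FROM users WHERE name = ?`, name).Scan(&id)
//
// db, the users table and name are assumed to exist in the caller's code.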

Some files were not shown because too many files have changed in this diff.