Compare commits

...

17 Commits

Author SHA1 Message Date
6543
f460b7543e Changelog v1.16.4 (#19081) 2022-03-14 21:55:33 +01:00
6543
1cb649525d Restrict email address validation (#17688) (#19085)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2022-03-14 20:51:58 +01:00
6543
99861e3e06 Fix lfs bug (#19072) (#19080)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2022-03-14 15:59:54 +01:00
Gusted
66b8a43e5f Refactor mirror code & fix StartToMirror (#18904) (#19075)
- Backport #18904.
2022-03-14 20:04:41 +08:00
zeripath
d285905826 Update the webauthn_credential_id_sequence in Postgres (#19048) (#19060)
Backport #19048

There is (yet) another problem with v210 in that Postgres will silently allow preset
ID insertions ... but it will not update the sequence value.

This PR simply adds a little step to the end of the v210 migration to update the
sequence number.

Users who have already migrated and find that they cannot insert new
webauthn_credentials into the DB can either run:

```bash
gitea doctor recreate-table webauthn_credential
```

or

```sql
SELECT setval('webauthn_credential_id_seq', COALESCE((SELECT MAX(id)+1 FROM webauthn_credential), 1), false);
```

which will fix the bad sequence.

Fix #19012

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: 6543 <6543@obermui.de>
2022-03-13 12:02:19 +08:00
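For reference, the entire fix boils down to one dialect-guarded statement appended to the v210 migration (the exact change appears in the migrations diff further down). A minimal xorm sketch under that assumption, with an illustrative function name and the table identifier left unquoted so it is valid when run against Postgres:

```go
package migrations

import (
	"xorm.io/xorm"
	"xorm.io/xorm/schemas"
)

// fixWebauthnCredentialSequence sketches the dialect-guarded step the backport
// adds to the end of the v210 migration; illustrative, not the verbatim code.
func fixWebauthnCredentialSequence(x *xorm.Engine) error {
	// Only PostgreSQL keeps an id sequence that preset-ID inserts leave stale.
	if x.Dialect().URI().DBType != schemas.POSTGRES {
		return nil
	}
	_, err := x.Exec("SELECT setval('webauthn_credential_id_seq', COALESCE((SELECT MAX(id)+1 FROM webauthn_credential), 1), false)")
	return err
}
```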
zeripath
4df2320ba6 Prevent 500 when there is an error during new auth source post (#19041) (#19059)
Backport #19041

Fix #19036

Signed-off-by: Andrew Thornton <art27@cantab.net>
2022-03-13 03:42:31 +01:00
zeripath
0fe99cc00c If rendering has failed due to a net.OpError stop rendering (attempt 2) (#19049) (#19056)
Backport #19049

Unfortunately #18642 does not work because `*net.OpError` does not implement
the `Is` interface that `errors.Is` needs to work correctly - leading to the
irritating conclusion that a `*net.OpError` is not a `*net.OpError`.

Here we keep the `errors.Is` check, because presumably this will be fixed at
some point in the Go standard library, but we also add a simple type
assertion as a fallback.

Fix #18629

Signed-off-by: Andrew Thornton <art27@cantab.net>
2022-03-10 22:13:55 +01:00
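The conclusion above is easier to see in isolation: `errors.Is` compares a `*net.OpError` target by pointer equality, and the type defines no `Is` method, so the check never matches a freshly constructed target. A minimal sketch of the combined check (hypothetical helper name, not the actual `serverErrorInternal` code shown in the `modules/context` diff below):

```go
package main

import (
	"errors"
	"fmt"
	"net"
)

// isConnectionError mirrors the backported check: keep errors.Is for the day
// the standard library handles it, but fall back to a plain type assertion.
func isConnectionError(err error) bool {
	if errors.Is(err, &net.OpError{}) {
		// Pointer comparison against a fresh &net.OpError{} never matches,
		// so today this branch is effectively dead.
		return true
	}
	_, ok := err.(*net.OpError)
	return ok
}

func main() {
	err := &net.OpError{Op: "write", Net: "tcp", Err: errors.New("broken pipe")}
	fmt.Println(errors.Is(err, &net.OpError{})) // false
	fmt.Println(isConnectionError(err))         // true
}
```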
Norwin
580401ecbf Fix flag validation (#19046) (#19051)
Regression from #5785
2022-03-10 20:23:55 +00:00
zeripath
7aa29720f0 Improve SyncMirrors logging (#19045) (#19050)
Backport #19045

Yet another issue has come up where the logging from SyncMirrors does not provide
enough context. This PR adds more context to these logging events.

Related #19038

Signed-off-by: Andrew Thornton <art27@cantab.net>
2022-03-10 16:06:35 +01:00
6543
3e5c844a77 fix pam authorization (#19040) (#19047)
Backport #19040 

The PAM module previously checked only the result of the authentication module.

However, in normal PAM practice most users will expect the account module's authorization to be checked as well. Without this check, in almost every configuration, expired accounts and accounts with expired passwords will still be able to log in.

This is likely to be a significant gotcha and leaves most users' configurations potentially insecure. Therefore we should add the account authorization check.

## ⚠️ **BREAKING** ⚠️ 

Users of the PAM module who rely on account modules not being checked will need to change their PAM configuration.

However, as the vast majority of PAM users will expect account authorization to be checked in addition to authentication, we should make this breaking change so that the default behaviour is correct for the majority.

---

I suggest we backport this despite its BREAKING nature because the current behaviour is so surprising.

Thanks to @ysf for bringing this to our attention.


Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: ysf <34326+ysf@users.noreply.github.com>
2022-03-10 08:15:35 +00:00
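For context, the two PAM stages the message refers to map onto two calls in the Go PAM binding used here (`github.com/msteinert/pam`): `Authenticate` runs the auth modules, while `AcctMgmt` runs the account modules that enforce expiry. A short sketch, with hypothetical names rather than the actual `services/auth` code, of a login check that covers both:

```go
package pamauth

import (
	"errors"

	"github.com/msteinert/pam"
)

// pamLogin authenticates a user and then verifies account validity, the step
// the commit adds; without AcctMgmt, expired accounts can still log in.
func pamLogin(service, user, passwd string) (string, error) {
	t, err := pam.StartFunc(service, user, func(s pam.Style, msg string) (string, error) {
		switch s {
		case pam.PromptEchoOff:
			return passwd, nil // password prompt
		case pam.PromptEchoOn, pam.ErrorMsg, pam.TextInfo:
			return "", nil
		}
		return "", errors.New("unrecognized PAM message style")
	})
	if err != nil {
		return "", err
	}
	// Stage 1: auth modules check the credentials.
	if err = t.Authenticate(0); err != nil {
		return "", err
	}
	// Stage 2: account modules reject expired accounts and passwords.
	if err = t.AcctMgmt(0); err != nil {
		return "", err
	}
	// The PAM stack may transform the login name, so read it back.
	return t.GetItem(pam.User)
}
```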
zeripath
4047c5c068 Ignore missing comment for user notifications (#18954) (#19043) 2022-03-10 01:48:27 -05:00
zeripath
03d924238c Set rel="nofollow noindex" on new issue links (#19023) (#19042)
Backport #19023

Fix #19018

Signed-off-by: Andrew Thornton <art27@cantab.net>
2022-03-09 23:01:30 +00:00
Lunny Xiao
bc1248ed9e Upgrading binding package (#19034) (#19035)
Backport #19034

Fix #18855
2022-03-09 18:07:46 +00:00
zeripath
dd52c08b74 Don't show context cancelled errors in attribute reader (#19006) (#19027)
Backport #19006

Fix #18997

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2022-03-08 19:20:37 +08:00
Lunny Xiao
b811b819e2 Fix update hint bug (#19002) 2022-03-04 18:28:17 +00:00
Otto Richter (fnetX)
da985b25ce Fix potential assignee query for repo (#18994) (#18999)
* Fix potential assignee query for repo

* Add tests for `GetRepoAssignees`

- As per https://github.com/go-gitea/gitea/pull/18994#issuecomment-1058506640

Co-authored-by: Gusted <williamzijl7@hotmail.com>
2022-03-05 00:12:34 +08:00
6543
ae9c51df7c allow overwrite artifacts for github releases (#18987) (#18988) 2022-03-03 16:18:55 +01:00
36 changed files with 354 additions and 95 deletions

View File

@@ -804,11 +804,12 @@ steps:
depends_on: [gpg-sign] depends_on: [gpg-sign]
- name: github - name: github
image: plugins/github-release:1 image: plugins/github-release:latest
pull: always pull: always
settings: settings:
files: files:
- "dist/release/*" - "dist/release/*"
file_exists: overwrite
environment: environment:
GITHUB_TOKEN: GITHUB_TOKEN:
from_secret: github_token from_secret: github_token

View File

@@ -4,6 +4,28 @@ This changelog goes through all the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.io). been added to each release, please refer to the [blog](https://blog.gitea.io).
## [1.16.4](https://github.com/go-gitea/gitea/releases/tag/v1.16.4) - 2022-03-14
* SECURITY
* Restrict email address validation (#17688) (#19085)
* Fix lfs bug (#19072) (#19080)
* ENHANCEMENTS
* Improve SyncMirrors logging (#19045) (#19050)
* BUGFIXES
* Refactor mirror code & fix `StartToMirror` (#18904) (#19075)
* Update the webauthn_credential_id_sequence in Postgres (#19048) (#19060)
* Prevent 500 when there is an error during new auth source post (#19041) (#19059)
* If rendering has failed due to a net.OpError stop rendering (attempt 2) (#19049) (#19056)
* Fix flag validation (#19046) (#19051)
* Add pam account authorization check (#19040) (#19047)
* Ignore missing comment for user notifications (#18954) (#19043)
* Set `rel="nofollow noindex"` on new issue links (#19023) (#19042)
* Upgrading binding package (#19034) (#19035)
* Don't show context cancelled errors in attribute reader (#19006) (#19027)
* Fix update hint bug (#18996) (#19002)
* MISC
* Fix potential assignee query for repo (#18994) (#18999)
## [1.16.3](https://github.com/go-gitea/gitea/releases/tag/v1.16.3) - 2022-03-02 ## [1.16.3](https://github.com/go-gitea/gitea/releases/tag/v1.16.3) - 2022-03-02
* SECURITY * SECURITY

View File

@@ -31,7 +31,7 @@ func argsSet(c *cli.Context, args ...string) error {
return errors.New(a + " is not set") return errors.New(a + " is not set")
} }
if util.IsEmptyString(a) { if util.IsEmptyString(c.String(a)) {
return errors.New(a + " is required") return errors.New(a + " is required")
} }
} }

View File

@@ -42,7 +42,7 @@ To maintain understandable code and avoid circular dependencies it is important
- `modules/setting`: Store all system configurations read from ini files and has been referenced by everywhere. But they should be used as function parameters when possible. - `modules/setting`: Store all system configurations read from ini files and has been referenced by everywhere. But they should be used as function parameters when possible.
- `modules/git`: Package to interactive with `Git` command line or Gogit package. - `modules/git`: Package to interactive with `Git` command line or Gogit package.
- `public`: Compiled frontend files (javascript, images, css, etc.) - `public`: Compiled frontend files (javascript, images, css, etc.)
- `routers`: Handling of server requests. As it uses other Gitea packages to serve the request, other packages (models, modules or services) shall not depend on routers. - `routers`: Handling of server requests. As it uses other Gitea packages to serve the request, other packages (models, modules or services) must not depend on routers.
- `routers/api` Contains routers for `/api/v1` aims to handle RESTful API requests. - `routers/api` Contains routers for `/api/v1` aims to handle RESTful API requests.
- `routers/install` Could only respond when system is in INSTALL mode (INSTALL_LOCK=false). - `routers/install` Could only respond when system is in INSTALL mode (INSTALL_LOCK=false).
- `routers/private` will only be invoked by internal sub commands, especially `serv` and `hooks`. - `routers/private` will only be invoked by internal sub commands, especially `serv` and `hooks`.
@@ -106,10 +106,20 @@ i.e. `servcies/user`, `models/repository`.
Since there are some packages which use the same package name, it is possible that you find packages like `modules/user`, `models/user`, and `services/user`. When these packages are imported in one Go file, it's difficult to know which package we are using and if it's a variable name or an import name. So, we always recommend to use import aliases. To differ from package variables which are commonly in camelCase, just use **snake_case** for import aliases. Since there are some packages which use the same package name, it is possible that you find packages like `modules/user`, `models/user`, and `services/user`. When these packages are imported in one Go file, it's difficult to know which package we are using and if it's a variable name or an import name. So, we always recommend to use import aliases. To differ from package variables which are commonly in camelCase, just use **snake_case** for import aliases.
i.e. `import user_service "code.gitea.io/gitea/services/user"` i.e. `import user_service "code.gitea.io/gitea/services/user"`
### Important Gotchas
- Never write `x.Update(exemplar)` without an explicit `WHERE` clause:
- This will cause all rows in the table to be updated with the non-zero values of the exemplar - including IDs.
- You should usually write `x.ID(id).Update(exemplar)`.
- If during a migration you are inserting into a table using `x.Insert(exemplar)` where the ID is preset:
- You will need to ``SET IDENTITY_INSERT `table` ON`` for the MSSQL variant (the migration will fail otherwise)
- However, you will also need to update the id sequence for postgres - the migration will silently pass here but later insertions will fail:
``SELECT setval('table_name_id_seq', COALESCE((SELECT MAX(id)+1 FROM `table_name`), 1), false)``
### Future Tasks ### Future Tasks
Currently, we are creating some refactors to do the following things: Currently, we are creating some refactors to do the following things:
- Correct that codes which doesn't follow the rules. - Correct that codes which doesn't follow the rules.
- There are too many files in `models`, so we are moving some of them into a sub package `models/xxx`. - There are too many files in `models`, so we are moving some of them into a sub package `models/xxx`.
- Some `modules` sub packages should be moved to `services` because they depends on `models`. - Some `modules` sub packages should be moved to `services` because they depend on `models`.

go.mod (2 changed lines)
View File

@@ -6,7 +6,7 @@ require (
cloud.google.com/go v0.78.0 // indirect cloud.google.com/go v0.78.0 // indirect
code.gitea.io/gitea-vet v0.2.1 code.gitea.io/gitea-vet v0.2.1
code.gitea.io/sdk/gitea v0.15.1 code.gitea.io/sdk/gitea v0.15.1
gitea.com/go-chi/binding v0.0.0-20211013065440-d16dc407c2be gitea.com/go-chi/binding v0.0.0-20220309004920-114340dabecb
gitea.com/go-chi/cache v0.0.0-20211013020926-78790b11abf1 gitea.com/go-chi/cache v0.0.0-20211013020926-78790b11abf1
gitea.com/go-chi/captcha v0.0.0-20211013065431-70641c1a35d5 gitea.com/go-chi/captcha v0.0.0-20211013065431-70641c1a35d5
gitea.com/go-chi/session v0.0.0-20211218221615-e3605d8b28b8 gitea.com/go-chi/session v0.0.0-20211218221615-e3605d8b28b8

go.sum (7 changed lines)
View File

@@ -41,8 +41,8 @@ code.gitea.io/gitea-vet v0.2.1/go.mod h1:zcNbT/aJEmivCAhfmkHOlT645KNOf9W2KnkLgFj
code.gitea.io/sdk/gitea v0.15.1 h1:WJreC7YYuxbn0UDaPuWIe/mtiNKTvLN8MLkaw71yx/M= code.gitea.io/sdk/gitea v0.15.1 h1:WJreC7YYuxbn0UDaPuWIe/mtiNKTvLN8MLkaw71yx/M=
code.gitea.io/sdk/gitea v0.15.1/go.mod h1:klY2LVI3s3NChzIk/MzMn7G1FHrfU7qd63iSMVoHRBA= code.gitea.io/sdk/gitea v0.15.1/go.mod h1:klY2LVI3s3NChzIk/MzMn7G1FHrfU7qd63iSMVoHRBA=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
gitea.com/go-chi/binding v0.0.0-20211013065440-d16dc407c2be h1:IzSwPVzd2hE6e67ujY8ReBCrQ5IFNd0uiBmC7Ux5IaY= gitea.com/go-chi/binding v0.0.0-20220309004920-114340dabecb h1:Yy0Bxzc8R2wxiwXoG/rECGplJUSpXqCsog9PuJFgiHs=
gitea.com/go-chi/binding v0.0.0-20211013065440-d16dc407c2be/go.mod h1:/vR0YjlusOYvosKYW7QKcSnrY0nPLe4RQ/DGi3+i/Do= gitea.com/go-chi/binding v0.0.0-20220309004920-114340dabecb/go.mod h1:77TZu701zMXWJFvB8gvTbQ92zQ3DQq/H7l5wAEjQRKc=
gitea.com/go-chi/cache v0.0.0-20210110083709-82c4c9ce2d5e/go.mod h1:k2V/gPDEtXGjjMGuBJiapffAXTv76H4snSmlJRLUhH0= gitea.com/go-chi/cache v0.0.0-20210110083709-82c4c9ce2d5e/go.mod h1:k2V/gPDEtXGjjMGuBJiapffAXTv76H4snSmlJRLUhH0=
gitea.com/go-chi/cache v0.0.0-20211013020926-78790b11abf1 h1:Z7DcvTkxt8ovcENgPsQ7xzrGNSQmmIjGS9fJEb1l8jk= gitea.com/go-chi/cache v0.0.0-20211013020926-78790b11abf1 h1:Z7DcvTkxt8ovcENgPsQ7xzrGNSQmmIjGS9fJEb1l8jk=
gitea.com/go-chi/cache v0.0.0-20211013020926-78790b11abf1/go.mod h1:k2V/gPDEtXGjjMGuBJiapffAXTv76H4snSmlJRLUhH0= gitea.com/go-chi/cache v0.0.0-20211013020926-78790b11abf1/go.mod h1:k2V/gPDEtXGjjMGuBJiapffAXTv76H4snSmlJRLUhH0=
@@ -489,8 +489,9 @@ github.com/gobuffalo/packr/v2 v2.2.0/go.mod h1:CaAwI0GPIAv+5wKLtv8Afwl+Cm78K/I/V
github.com/gobuffalo/syncx v0.0.0-20190224160051-33c29581e754/go.mod h1:HhnNqWY95UYwwW3uSASeV7vtgYkT2t16hJgV3AEPUpw= github.com/gobuffalo/syncx v0.0.0-20190224160051-33c29581e754/go.mod h1:HhnNqWY95UYwwW3uSASeV7vtgYkT2t16hJgV3AEPUpw=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/goccy/go-json v0.7.4 h1:B44qRUFwz/vxPKPISQ1KhvzRi9kZ28RAf6YtjriBZ5k=
github.com/goccy/go-json v0.7.4/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I= github.com/goccy/go-json v0.7.4/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/goccy/go-json v0.9.5 h1:ooSMW526ZjK+EaL5elrSyN2EzIfi/3V0m4+HJEDYLik=
github.com/goccy/go-json v0.9.5/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/gofrs/uuid v4.2.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM= github.com/gofrs/uuid v4.2.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s= github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=

View File

@@ -193,12 +193,13 @@ func LFSAutoAssociate(metas []*LFSMetaObject, user *user_model.User, repoID int6
// admin can associate any LFS object to any repository, and we do not care about errors (eg: duplicated unique key), // admin can associate any LFS object to any repository, and we do not care about errors (eg: duplicated unique key),
// even if error occurs, it won't hurt users and won't make things worse // even if error occurs, it won't hurt users and won't make things worse
for i := range metas { for i := range metas {
p := lfs.Pointer{Oid: metas[i].Oid, Size: metas[i].Size}
_, err = sess.Insert(&LFSMetaObject{ _, err = sess.Insert(&LFSMetaObject{
Pointer: lfs.Pointer{Oid: metas[i].Oid, Size: metas[i].Size}, Pointer: p,
RepositoryID: repoID, RepositoryID: repoID,
}) })
if err != nil { if err != nil {
log.Warn("failed to insert LFS meta object into database, err=%v", err) log.Warn("failed to insert LFS meta object %-v for repo_id: %d into database, err=%v", p, repoID, err)
} }
} }
} }

View File

@@ -174,5 +174,11 @@ func remigrateU2FCredentials(x *xorm.Engine) error {
regs = regs[:0] regs = regs[:0]
} }
if x.Dialect().URI().DBType == schemas.POSTGRES {
if _, err := x.Exec("SELECT setval('webauthn_credential_id_seq', COALESCE((SELECT MAX(id)+1 FROM `webauthn_credential`), 1), false)"); err != nil {
return err
}
}
return nil return nil
} }

View File

@@ -498,14 +498,15 @@ func (n *Notification) APIURL() string {
type NotificationList []*Notification type NotificationList []*Notification
// LoadAttributes load Repo Issue User and Comment if not loaded // LoadAttributes load Repo Issue User and Comment if not loaded
func (nl NotificationList) LoadAttributes() (err error) { func (nl NotificationList) LoadAttributes() error {
var err error
for i := 0; i < len(nl); i++ { for i := 0; i < len(nl); i++ {
err = nl[i].LoadAttributes() err = nl[i].LoadAttributes()
if err != nil && !IsErrCommentNotExist(err) { if err != nil && !IsErrCommentNotExist(err) {
return return err
} }
} }
return return nil
} }
func (nl NotificationList) getPendingRepoIDs() []int64 { func (nl NotificationList) getPendingRepoIDs() []int64 {

View File

@@ -153,7 +153,7 @@ func getRepoAssignees(ctx context.Context, repo *repo_model.Repository) (_ []*us
userIDs := make([]int64, 0, 10) userIDs := make([]int64, 0, 10)
if err = e.Table("access"). if err = e.Table("access").
Where("repo_id = ? AND mode >= ?", repo.ID, perm.AccessModeWrite). Where("repo_id = ? AND mode >= ?", repo.ID, perm.AccessModeWrite).
Select("id"). Select("user_id").
Find(&userIDs); err != nil { Find(&userIDs); err != nil {
return nil, err return nil, err
} }

View File

@@ -167,3 +167,21 @@ func TestLinkedRepository(t *testing.T) {
}) })
} }
} }
func TestRepoAssignees(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
repo2 := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 2}).(*repo_model.Repository)
users, err := GetRepoAssignees(repo2)
assert.NoError(t, err)
assert.Len(t, users, 1)
assert.Equal(t, users[0].ID, int64(2))
repo21 := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 21}).(*repo_model.Repository)
users, err = GetRepoAssignees(repo21)
assert.NoError(t, err)
assert.Len(t, users, 3)
assert.Equal(t, users[0].ID, int64(15))
assert.Equal(t, users[1].ID, int64(18))
assert.Equal(t, users[2].ID, int64(16))
}

View File

@@ -10,6 +10,7 @@ import (
"errors" "errors"
"fmt" "fmt"
"net/mail" "net/mail"
"regexp"
"strings" "strings"
"code.gitea.io/gitea/models/db" "code.gitea.io/gitea/models/db"
@@ -21,10 +22,23 @@ import (
"xorm.io/builder" "xorm.io/builder"
) )
var ( // ErrEmailNotActivated e-mail address has not been activated error
// ErrEmailNotActivated e-mail address has not been activated error var ErrEmailNotActivated = errors.New("e-mail address has not been activated")
ErrEmailNotActivated = errors.New("E-mail address has not been activated")
) // ErrEmailCharIsNotSupported e-mail address contains unsupported character
type ErrEmailCharIsNotSupported struct {
Email string
}
// IsErrEmailCharIsNotSupported checks if an error is an ErrEmailCharIsNotSupported
func IsErrEmailCharIsNotSupported(err error) bool {
_, ok := err.(ErrEmailCharIsNotSupported)
return ok
}
func (err ErrEmailCharIsNotSupported) Error() string {
return fmt.Sprintf("e-mail address contains unsupported character [email: %s]", err.Email)
}
// ErrEmailInvalid represents an error where the email address does not comply with RFC 5322 // ErrEmailInvalid represents an error where the email address does not comply with RFC 5322
type ErrEmailInvalid struct { type ErrEmailInvalid struct {
@@ -108,12 +122,24 @@ func (email *EmailAddress) BeforeInsert() {
} }
} }
var emailRegexp = regexp.MustCompile("^[a-zA-Z0-9.!#$%&'*+-/=?^_`{|}~]*@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$")
// ValidateEmail check if email is a allowed address // ValidateEmail check if email is a allowed address
func ValidateEmail(email string) error { func ValidateEmail(email string) error {
if len(email) == 0 { if len(email) == 0 {
return nil return nil
} }
if !emailRegexp.MatchString(email) {
return ErrEmailCharIsNotSupported{email}
}
if !(email[0] >= 'a' && email[0] <= 'z') &&
!(email[0] >= 'A' && email[0] <= 'Z') &&
!(email[0] >= '0' && email[0] <= '9') {
return ErrEmailInvalid{email}
}
if _, err := mail.ParseAddress(email); err != nil { if _, err := mail.ParseAddress(email); err != nil {
return ErrEmailInvalid{email} return ErrEmailInvalid{email}
} }

View File

@@ -252,3 +252,58 @@ func TestListEmails(t *testing.T) {
assert.Len(t, emails, 5) assert.Len(t, emails, 5)
assert.Greater(t, count, int64(len(emails))) assert.Greater(t, count, int64(len(emails)))
} }
func TestEmailAddressValidate(t *testing.T) {
kases := map[string]error{
"abc@gmail.com": nil,
"132@hotmail.com": nil,
"1-3-2@test.org": nil,
"1.3.2@test.org": nil,
"a_123@test.org.cn": nil,
`first.last@iana.org`: nil,
`first!last@iana.org`: nil,
`first#last@iana.org`: nil,
`first$last@iana.org`: nil,
`first%last@iana.org`: nil,
`first&last@iana.org`: nil,
`first'last@iana.org`: nil,
`first*last@iana.org`: nil,
`first+last@iana.org`: nil,
`first/last@iana.org`: nil,
`first=last@iana.org`: nil,
`first?last@iana.org`: nil,
`first^last@iana.org`: nil,
"first`last@iana.org": nil,
`first{last@iana.org`: nil,
`first|last@iana.org`: nil,
`first}last@iana.org`: nil,
`first~last@iana.org`: nil,
`first;last@iana.org`: ErrEmailCharIsNotSupported{`first;last@iana.org`},
".233@qq.com": ErrEmailInvalid{".233@qq.com"},
"!233@qq.com": ErrEmailInvalid{"!233@qq.com"},
"#233@qq.com": ErrEmailInvalid{"#233@qq.com"},
"$233@qq.com": ErrEmailInvalid{"$233@qq.com"},
"%233@qq.com": ErrEmailInvalid{"%233@qq.com"},
"&233@qq.com": ErrEmailInvalid{"&233@qq.com"},
"'233@qq.com": ErrEmailInvalid{"'233@qq.com"},
"*233@qq.com": ErrEmailInvalid{"*233@qq.com"},
"+233@qq.com": ErrEmailInvalid{"+233@qq.com"},
"/233@qq.com": ErrEmailInvalid{"/233@qq.com"},
"=233@qq.com": ErrEmailInvalid{"=233@qq.com"},
"?233@qq.com": ErrEmailInvalid{"?233@qq.com"},
"^233@qq.com": ErrEmailInvalid{"^233@qq.com"},
"`233@qq.com": ErrEmailInvalid{"`233@qq.com"},
"{233@qq.com": ErrEmailInvalid{"{233@qq.com"},
"|233@qq.com": ErrEmailInvalid{"|233@qq.com"},
"}233@qq.com": ErrEmailInvalid{"}233@qq.com"},
"~233@qq.com": ErrEmailInvalid{"~233@qq.com"},
";233@qq.com": ErrEmailCharIsNotSupported{";233@qq.com"},
"Foo <foo@bar.com>": ErrEmailCharIsNotSupported{"Foo <foo@bar.com>"},
string([]byte{0xE2, 0x84, 0xAA}): ErrEmailCharIsNotSupported{string([]byte{0xE2, 0x84, 0xAA})},
}
for kase, err := range kases {
t.Run(kase, func(t *testing.T) {
assert.EqualValues(t, err, ValidateEmail(kase))
})
}
}

View File

@@ -644,6 +644,15 @@ func CreateUser(u *User, overwriteDefault ...*CreateUserOverwriteOptions) (err e
u.Visibility = overwriteDefault[0].Visibility u.Visibility = overwriteDefault[0].Visibility
} }
// validate data
if err := validateUser(u); err != nil {
return err
}
if err := ValidateEmail(u.Email); err != nil {
return err
}
ctx, committer, err := db.TxContext() ctx, committer, err := db.TxContext()
if err != nil { if err != nil {
return err return err
@@ -652,11 +661,6 @@ func CreateUser(u *User, overwriteDefault ...*CreateUserOverwriteOptions) (err e
sess := db.GetEngine(ctx) sess := db.GetEngine(ctx)
// validate data
if err := validateUser(u); err != nil {
return err
}
isExist, err := isUserExist(sess, 0, u.Name) isExist, err := isUserExist(sess, 0, u.Name)
if err != nil { if err != nil {
return err return err

View File

@@ -232,7 +232,7 @@ func TestCreateUserInvalidEmail(t *testing.T) {
err := CreateUser(user) err := CreateUser(user)
assert.Error(t, err) assert.Error(t, err)
assert.True(t, IsErrEmailInvalid(err)) assert.True(t, IsErrEmailCharIsNotSupported(err))
} }
func TestCreateUserEmailAlreadyUsed(t *testing.T) { func TestCreateUserEmailAlreadyUsed(t *testing.T) {

View File

@@ -36,6 +36,10 @@ func Auth(serviceName, userName, passwd string) (string, error) {
return "", err return "", err
} }
if err = t.AcctMgmt(0); err != nil {
return "", err
}
// PAM login names might suffer transformations in the PAM stack. // PAM login names might suffer transformations in the PAM stack.
// We should take whatever the PAM stack returns for it. // We should take whatever the PAM stack returns for it.
return t.GetItem(pam.User) return t.GetItem(pam.User)

View File

@@ -266,7 +266,7 @@ func (ctx *Context) ServerError(logMsg string, logErr error) {
func (ctx *Context) serverErrorInternal(logMsg string, logErr error) { func (ctx *Context) serverErrorInternal(logMsg string, logErr error) {
if logErr != nil { if logErr != nil {
log.ErrorWithSkip(2, "%s: %v", logMsg, logErr) log.ErrorWithSkip(2, "%s: %v", logMsg, logErr)
if errors.Is(logErr, &net.OpError{}) { if _, ok := logErr.(*net.OpError); ok || errors.Is(logErr, &net.OpError{}) {
// This is an error within the underlying connection // This is an error within the underlying connection
// and further rendering will not work so just return // and further rendering will not work so just return
return return

View File

@@ -191,7 +191,9 @@ func (c *CheckAttributeReader) Run() error {
} }
return nil return nil
}) })
if err != nil && c.ctx.Err() != nil && err.Error() != "signal: killed" { if err != nil && // If there is an error we need to return but:
c.ctx.Err() != err && // 1. Ignore the context error if the context is cancelled or exceeds the deadline (RunWithContext could return c.ctx.Err() which is Canceled or DeadlineExceeded)
err.Error() != "signal: killed" { // 2. We should not pass up errors due to the program being killed
return fmt.Errorf("failed to run attr-check. Error: %w\nStderr: %s", err, stdErr.String()) return fmt.Errorf("failed to run attr-check. Error: %w\nStderr: %s", err, stdErr.String())
} }
return nil return nil

View File

@@ -14,6 +14,8 @@ import (
"regexp" "regexp"
"strconv" "strconv"
"strings" "strings"
"code.gitea.io/gitea/modules/log"
) )
const ( const (
@@ -111,6 +113,17 @@ func (p Pointer) RelativePath() string {
return path.Join(p.Oid[0:2], p.Oid[2:4], p.Oid[4:]) return path.Join(p.Oid[0:2], p.Oid[2:4], p.Oid[4:])
} }
// ColorFormat provides a basic color format for a Team
func (p Pointer) ColorFormat(s fmt.State) {
if p.Oid == "" && p.Size == 0 {
log.ColorFprintf(s, "<empty>")
return
}
log.ColorFprintf(s, "%s:%d",
log.NewColoredIDValue(p.Oid),
p.Size)
}
// GeneratePointer generates a pointer for arbitrary content // GeneratePointer generates a pointer for arbitrary content
func GeneratePointer(content io.Reader) (Pointer, error) { func GeneratePointer(content io.Reader) (Pointer, error) {
h := sha256.New() h := sha256.New()

View File

@@ -254,7 +254,7 @@ func SyncReleasesWithTags(repo *repo_model.Repository, gitRepo *git.Repository)
opts.Page = page opts.Page = page
rels, err := models.GetReleasesByRepoID(repo.ID, opts) rels, err := models.GetReleasesByRepoID(repo.ID, opts)
if err != nil { if err != nil {
return fmt.Errorf("GetReleasesByRepoID: %v", err) return fmt.Errorf("unable to GetReleasesByRepoID in Repo[%d:%s/%s]: %w", repo.ID, repo.OwnerName, repo.Name, err)
} }
if len(rels) == 0 { if len(rels) == 0 {
break break
@@ -265,11 +265,11 @@ func SyncReleasesWithTags(repo *repo_model.Repository, gitRepo *git.Repository)
} }
commitID, err := gitRepo.GetTagCommitID(rel.TagName) commitID, err := gitRepo.GetTagCommitID(rel.TagName)
if err != nil && !git.IsErrNotExist(err) { if err != nil && !git.IsErrNotExist(err) {
return fmt.Errorf("GetTagCommitID: %s: %v", rel.TagName, err) return fmt.Errorf("unable to GetTagCommitID for %q in Repo[%d:%s/%s]: %w", rel.TagName, repo.ID, repo.OwnerName, repo.Name, err)
} }
if git.IsErrNotExist(err) || commitID != rel.Sha1 { if git.IsErrNotExist(err) || commitID != rel.Sha1 {
if err := models.PushUpdateDeleteTag(repo, rel.TagName); err != nil { if err := models.PushUpdateDeleteTag(repo, rel.TagName); err != nil {
return fmt.Errorf("PushUpdateDeleteTag: %s: %v", rel.TagName, err) return fmt.Errorf("unable to PushUpdateDeleteTag: %q in Repo[%d:%s/%s]: %w", rel.TagName, repo.ID, repo.OwnerName, repo.Name, err)
} }
} else { } else {
existingRelTags[strings.ToLower(rel.TagName)] = struct{}{} existingRelTags[strings.ToLower(rel.TagName)] = struct{}{}
@@ -278,12 +278,12 @@ func SyncReleasesWithTags(repo *repo_model.Repository, gitRepo *git.Repository)
} }
tags, err := gitRepo.GetTags(0, 0) tags, err := gitRepo.GetTags(0, 0)
if err != nil { if err != nil {
return fmt.Errorf("GetTags: %v", err) return fmt.Errorf("unable to GetTags in Repo[%d:%s/%s]: %w", repo.ID, repo.OwnerName, repo.Name, err)
} }
for _, tagName := range tags { for _, tagName := range tags {
if _, ok := existingRelTags[strings.ToLower(tagName)]; !ok { if _, ok := existingRelTags[strings.ToLower(tagName)]; !ok {
if err := PushUpdateAddTag(repo, gitRepo, tagName); err != nil { if err := PushUpdateAddTag(repo, gitRepo, tagName); err != nil {
return fmt.Errorf("pushUpdateAddTag: %v", err) return fmt.Errorf("unable to PushUpdateAddTag: %q to Repo[%d:%s/%s]: %w", tagName, repo.ID, repo.OwnerName, repo.Name, err)
} }
} }
} }
@@ -294,11 +294,11 @@ func SyncReleasesWithTags(repo *repo_model.Repository, gitRepo *git.Repository)
func PushUpdateAddTag(repo *repo_model.Repository, gitRepo *git.Repository, tagName string) error { func PushUpdateAddTag(repo *repo_model.Repository, gitRepo *git.Repository, tagName string) error {
tag, err := gitRepo.GetTag(tagName) tag, err := gitRepo.GetTag(tagName)
if err != nil { if err != nil {
return fmt.Errorf("GetTag: %v", err) return fmt.Errorf("unable to GetTag: %w", err)
} }
commit, err := tag.Commit(gitRepo) commit, err := tag.Commit(gitRepo)
if err != nil { if err != nil {
return fmt.Errorf("Commit: %v", err) return fmt.Errorf("unable to get tag Commit: %w", err)
} }
sig := tag.Tagger sig := tag.Tagger
@@ -315,14 +315,14 @@ func PushUpdateAddTag(repo *repo_model.Repository, gitRepo *git.Repository, tagN
if sig != nil { if sig != nil {
author, err = user_model.GetUserByEmail(sig.Email) author, err = user_model.GetUserByEmail(sig.Email)
if err != nil && !user_model.IsErrUserNotExist(err) { if err != nil && !user_model.IsErrUserNotExist(err) {
return fmt.Errorf("GetUserByEmail: %v", err) return fmt.Errorf("unable to GetUserByEmail for %q: %w", sig.Email, err)
} }
createdAt = sig.When createdAt = sig.When
} }
commitsCount, err := commit.CommitsCount() commitsCount, err := commit.CommitsCount()
if err != nil { if err != nil {
return fmt.Errorf("CommitsCount: %v", err) return fmt.Errorf("unable to get CommitsCount: %w", err)
} }
var rel = models.Release{ var rel = models.Release{
@@ -359,14 +359,14 @@ func StoreMissingLfsObjectsInRepository(ctx context.Context, repo *repo_model.Re
_, err := models.NewLFSMetaObject(&models.LFSMetaObject{Pointer: p, RepositoryID: repo.ID}) _, err := models.NewLFSMetaObject(&models.LFSMetaObject{Pointer: p, RepositoryID: repo.ID})
if err != nil { if err != nil {
log.Error("Error creating LFS meta object %v: %v", p, err) log.Error("Repo[%-v]: Error creating LFS meta object %-v: %v", repo, p, err)
return err return err
} }
if err := contentStore.Put(p, content); err != nil { if err := contentStore.Put(p, content); err != nil {
log.Error("Error storing content for LFS meta object %v: %v", p, err) log.Error("Repo[%-v]: Error storing content for LFS meta object %-v: %v", repo, p, err)
if _, err2 := models.RemoveLFSMetaObjectByOid(repo.ID, p.Oid); err2 != nil { if _, err2 := models.RemoveLFSMetaObjectByOid(repo.ID, p.Oid); err2 != nil {
log.Error("Error removing LFS meta object %v: %v", p, err2) log.Error("Repo[%-v]: Error removing LFS meta object %-v: %v", repo, p, err2)
} }
return err return err
} }
@@ -386,32 +386,32 @@ func StoreMissingLfsObjectsInRepository(ctx context.Context, repo *repo_model.Re
for pointerBlob := range pointerChan { for pointerBlob := range pointerChan {
meta, err := models.GetLFSMetaObjectByOid(repo.ID, pointerBlob.Oid) meta, err := models.GetLFSMetaObjectByOid(repo.ID, pointerBlob.Oid)
if err != nil && err != models.ErrLFSObjectNotExist { if err != nil && err != models.ErrLFSObjectNotExist {
log.Error("Error querying LFS meta object %v: %v", pointerBlob.Pointer, err) log.Error("Repo[%-v]: Error querying LFS meta object %-v: %v", repo, pointerBlob.Pointer, err)
return err return err
} }
if meta != nil { if meta != nil {
log.Trace("Skipping unknown LFS meta object %v", pointerBlob.Pointer) log.Trace("Repo[%-v]: Skipping unknown LFS meta object %-v", repo, pointerBlob.Pointer)
continue continue
} }
log.Trace("LFS object %v not present in repository %s", pointerBlob.Pointer, repo.FullName()) log.Trace("Repo[%-v]: LFS object %-v not present in repository", repo, pointerBlob.Pointer)
exist, err := contentStore.Exists(pointerBlob.Pointer) exist, err := contentStore.Exists(pointerBlob.Pointer)
if err != nil { if err != nil {
log.Error("Error checking if LFS object %v exists: %v", pointerBlob.Pointer, err) log.Error("Repo[%-v]: Error checking if LFS object %-v exists: %v", repo, pointerBlob.Pointer, err)
return err return err
} }
if exist { if exist {
log.Trace("LFS object %v already present; creating meta object", pointerBlob.Pointer) log.Trace("Repo[%-v]: LFS object %-v already present; creating meta object", repo, pointerBlob.Pointer)
_, err := models.NewLFSMetaObject(&models.LFSMetaObject{Pointer: pointerBlob.Pointer, RepositoryID: repo.ID}) _, err := models.NewLFSMetaObject(&models.LFSMetaObject{Pointer: pointerBlob.Pointer, RepositoryID: repo.ID})
if err != nil { if err != nil {
log.Error("Error creating LFS meta object %v: %v", pointerBlob.Pointer, err) log.Error("Repo[%-v]: Error creating LFS meta object %-v: %v", repo, pointerBlob.Pointer, err)
return err return err
} }
} else { } else {
if setting.LFS.MaxFileSize > 0 && pointerBlob.Size > setting.LFS.MaxFileSize { if setting.LFS.MaxFileSize > 0 && pointerBlob.Size > setting.LFS.MaxFileSize {
log.Info("LFS object %v download denied because of LFS_MAX_FILE_SIZE=%d < size %d", pointerBlob.Pointer, setting.LFS.MaxFileSize, pointerBlob.Size) log.Info("Repo[%-v]: LFS object %-v download denied because of LFS_MAX_FILE_SIZE=%d < size %d", repo, pointerBlob.Pointer, setting.LFS.MaxFileSize, pointerBlob.Size)
continue continue
} }
@@ -432,7 +432,7 @@ func StoreMissingLfsObjectsInRepository(ctx context.Context, repo *repo_model.Re
err, has := <-errChan err, has := <-errChan
if has { if has {
log.Error("Error enumerating LFS objects for repository: %v", err) log.Error("Repo[%-v]: Error enumerating LFS objects for repository: %v", repo, err)
return err return err
} }

View File

@@ -6,18 +6,21 @@ package storage
import ( import (
"context" "context"
"errors"
"io" "io"
"net/url" "net/url"
"os" "os"
"path"
"path/filepath" "path/filepath"
"strings"
"code.gitea.io/gitea/modules/log" "code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/util" "code.gitea.io/gitea/modules/util"
) )
var ( // ErrLocalPathNotSupported represents an error that path is not supported
_ ObjectStorage = &LocalStorage{} var ErrLocalPathNotSupported = errors.New("local path is not supported")
) var _ ObjectStorage = &LocalStorage{}
// LocalStorageType is the type descriptor for local storage // LocalStorageType is the type descriptor for local storage
const LocalStorageType Type = "local" const LocalStorageType Type = "local"
@@ -61,11 +64,18 @@ func NewLocalStorage(ctx context.Context, cfg interface{}) (ObjectStorage, error
// Open a file // Open a file
func (l *LocalStorage) Open(path string) (Object, error) { func (l *LocalStorage) Open(path string) (Object, error) {
if !isLocalPathValid(path) {
return nil, ErrLocalPathNotSupported
}
return os.Open(filepath.Join(l.dir, path)) return os.Open(filepath.Join(l.dir, path))
} }
// Save a file // Save a file
func (l *LocalStorage) Save(path string, r io.Reader, size int64) (int64, error) { func (l *LocalStorage) Save(path string, r io.Reader, size int64) (int64, error) {
if !isLocalPathValid(path) {
return 0, ErrLocalPathNotSupported
}
p := filepath.Join(l.dir, path) p := filepath.Join(l.dir, path)
if err := os.MkdirAll(filepath.Dir(p), os.ModePerm); err != nil { if err := os.MkdirAll(filepath.Dir(p), os.ModePerm); err != nil {
return 0, err return 0, err
@@ -109,8 +119,19 @@ func (l *LocalStorage) Stat(path string) (os.FileInfo, error) {
return os.Stat(filepath.Join(l.dir, path)) return os.Stat(filepath.Join(l.dir, path))
} }
func isLocalPathValid(p string) bool {
a := path.Clean(p)
if strings.HasPrefix(a, "../") || strings.HasPrefix(a, "..\\") {
return false
}
return a == p
}
// Delete delete a file // Delete delete a file
func (l *LocalStorage) Delete(path string) error { func (l *LocalStorage) Delete(path string) error {
if !isLocalPathValid(path) {
return ErrLocalPathNotSupported
}
p := filepath.Join(l.dir, path) p := filepath.Join(l.dir, path)
return util.Remove(p) return util.Remove(p)
} }

View File

@@ -0,0 +1,45 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package storage
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestLocalPathIsValid(t *testing.T) {
kases := []struct {
path string
valid bool
}{
{
"a/0/a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a14",
true,
},
{
"../a/0/a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a14",
false,
},
{
"a\\0\\a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a14",
true,
},
{
"b/../a/0/a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a14",
false,
},
{
"..\\a/0/a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a14",
false,
},
}
for _, k := range kases {
t.Run(k.path, func(t *testing.T) {
assert.EqualValues(t, k.valid, isLocalPathValid(k.path))
})
}
}

View File

@@ -119,6 +119,7 @@ func CreateUser(ctx *context.APIContext) {
user_model.IsErrEmailAlreadyUsed(err) || user_model.IsErrEmailAlreadyUsed(err) ||
db.IsErrNameReserved(err) || db.IsErrNameReserved(err) ||
db.IsErrNameCharsNotAllowed(err) || db.IsErrNameCharsNotAllowed(err) ||
user_model.IsErrEmailCharIsNotSupported(err) ||
user_model.IsErrEmailInvalid(err) || user_model.IsErrEmailInvalid(err) ||
db.IsErrNamePatternNotAllowed(err) { db.IsErrNamePatternNotAllowed(err) {
ctx.Error(http.StatusUnprocessableEntity, "", err) ctx.Error(http.StatusUnprocessableEntity, "", err)
@@ -265,7 +266,9 @@ func EditUser(ctx *context.APIContext) {
} }
if err := user_model.UpdateUser(u, emailChanged); err != nil { if err := user_model.UpdateUser(u, emailChanged); err != nil {
if user_model.IsErrEmailAlreadyUsed(err) || user_model.IsErrEmailInvalid(err) { if user_model.IsErrEmailAlreadyUsed(err) ||
user_model.IsErrEmailCharIsNotSupported(err) ||
user_model.IsErrEmailInvalid(err) {
ctx.Error(http.StatusUnprocessableEntity, "", err) ctx.Error(http.StatusUnprocessableEntity, "", err)
} else { } else {
ctx.Error(http.StatusInternalServerError, "UpdateUser", err) ctx.Error(http.StatusInternalServerError, "UpdateUser", err)

View File

@@ -121,7 +121,7 @@ func ListRepoNotifications(ctx *context.APIContext) {
return return
} }
err = nl.LoadAttributes() err = nl.LoadAttributes()
if err != nil && !models.IsErrCommentNotExist(err) { if err != nil {
ctx.InternalServerError(err) ctx.InternalServerError(err)
return return
} }

View File

@@ -80,7 +80,8 @@ func AddEmail(ctx *context.APIContext) {
if err := user_model.AddEmailAddresses(emails); err != nil { if err := user_model.AddEmailAddresses(emails); err != nil {
if user_model.IsErrEmailAlreadyUsed(err) { if user_model.IsErrEmailAlreadyUsed(err) {
ctx.Error(http.StatusUnprocessableEntity, "", "Email address has been used: "+err.(user_model.ErrEmailAlreadyUsed).Email) ctx.Error(http.StatusUnprocessableEntity, "", "Email address has been used: "+err.(user_model.ErrEmailAlreadyUsed).Email)
} else if user_model.IsErrEmailInvalid(err) { } else if user_model.IsErrEmailCharIsNotSupported(err) ||
user_model.IsErrEmailInvalid(err) {
errMsg := fmt.Sprintf("Email address %s invalid", err.(user_model.ErrEmailInvalid).Email) errMsg := fmt.Sprintf("Email address %s invalid", err.(user_model.ErrEmailInvalid).Email)
ctx.Error(http.StatusUnprocessableEntity, "", errMsg) ctx.Error(http.StatusUnprocessableEntity, "", errMsg)
} else { } else {

View File

@@ -93,7 +93,7 @@ func NewAuthSource(ctx *context.Context) {
ctx.Data["PageIsAdmin"] = true ctx.Data["PageIsAdmin"] = true
ctx.Data["PageIsAdminAuthentications"] = true ctx.Data["PageIsAdminAuthentications"] = true
ctx.Data["type"] = auth.LDAP ctx.Data["type"] = auth.LDAP.Int()
ctx.Data["CurrentTypeName"] = auth.Names[auth.LDAP] ctx.Data["CurrentTypeName"] = auth.Names[auth.LDAP]
ctx.Data["CurrentSecurityProtocol"] = ldap.SecurityProtocolNames[ldap.SecurityProtocolUnencrypted] ctx.Data["CurrentSecurityProtocol"] = ldap.SecurityProtocolNames[ldap.SecurityProtocolUnencrypted]
ctx.Data["smtp_auth"] = "PLAIN" ctx.Data["smtp_auth"] = "PLAIN"
@@ -112,7 +112,7 @@ func NewAuthSource(ctx *context.Context) {
ctx.Data["SSPIDefaultLanguage"] = "" ctx.Data["SSPIDefaultLanguage"] = ""
// only the first as default // only the first as default
ctx.Data["oauth2_provider"] = oauth2providers[0] ctx.Data["oauth2_provider"] = oauth2providers[0].Name
ctx.HTML(http.StatusOK, tplAuthNew) ctx.HTML(http.StatusOK, tplAuthNew)
} }

View File

@@ -171,6 +171,9 @@ func NewUserPost(ctx *context.Context) {
case user_model.IsErrEmailAlreadyUsed(err): case user_model.IsErrEmailAlreadyUsed(err):
ctx.Data["Err_Email"] = true ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_been_used"), tplUserNew, &form) ctx.RenderWithErr(ctx.Tr("form.email_been_used"), tplUserNew, &form)
case user_model.IsErrEmailCharIsNotSupported(err):
ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tplUserNew, &form)
case user_model.IsErrEmailInvalid(err): case user_model.IsErrEmailInvalid(err):
ctx.Data["Err_Email"] = true ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tplUserNew, &form) ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tplUserNew, &form)
@@ -386,7 +389,8 @@ func EditUserPost(ctx *context.Context) {
if user_model.IsErrEmailAlreadyUsed(err) { if user_model.IsErrEmailAlreadyUsed(err) {
ctx.Data["Err_Email"] = true ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_been_used"), tplUserEdit, &form) ctx.RenderWithErr(ctx.Tr("form.email_been_used"), tplUserEdit, &form)
} else if user_model.IsErrEmailInvalid(err) { } else if user_model.IsErrEmailCharIsNotSupported(err) ||
user_model.IsErrEmailInvalid(err) {
ctx.Data["Err_Email"] = true ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tplUserEdit, &form) ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tplUserEdit, &form)
} else { } else {

View File

@@ -573,6 +573,9 @@ func createUserInContext(ctx *context.Context, tpl base.TplName, form interface{
case user_model.IsErrEmailAlreadyUsed(err): case user_model.IsErrEmailAlreadyUsed(err):
ctx.Data["Err_Email"] = true ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_been_used"), tpl, form) ctx.RenderWithErr(ctx.Tr("form.email_been_used"), tpl, form)
case user_model.IsErrEmailCharIsNotSupported(err):
ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tpl, form)
case user_model.IsErrEmailInvalid(err): case user_model.IsErrEmailInvalid(err):
ctx.Data["Err_Email"] = true ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tpl, form) ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tpl, form)

View File

@@ -253,6 +253,13 @@ func LFSFileGet(ctx *context.Context) {
} }
ctx.Data["LFSFilesLink"] = ctx.Repo.RepoLink + "/settings/lfs" ctx.Data["LFSFilesLink"] = ctx.Repo.RepoLink + "/settings/lfs"
oid := ctx.Params("oid") oid := ctx.Params("oid")
p := lfs.Pointer{Oid: oid}
if !p.IsValid() {
ctx.NotFound("LFSFileGet", nil)
return
}
ctx.Data["Title"] = oid ctx.Data["Title"] = oid
ctx.Data["PageIsSettingsLFS"] = true ctx.Data["PageIsSettingsLFS"] = true
meta, err := models.GetLFSMetaObjectByOid(ctx.Repo.Repository.ID, oid) meta, err := models.GetLFSMetaObjectByOid(ctx.Repo.Repository.ID, oid)
@@ -343,6 +350,12 @@ func LFSDelete(ctx *context.Context) {
return return
} }
oid := ctx.Params("oid") oid := ctx.Params("oid")
p := lfs.Pointer{Oid: oid}
if !p.IsValid() {
ctx.NotFound("LFSDelete", nil)
return
}
count, err := models.RemoveLFSMetaObjectByOid(ctx.Repo.Repository.ID, oid) count, err := models.RemoveLFSMetaObjectByOid(ctx.Repo.Repository.ID, oid)
if err != nil { if err != nil {
ctx.ServerError("LFSDelete", err) ctx.ServerError("LFSDelete", err)

View File

@@ -188,7 +188,8 @@ func EmailPost(ctx *context.Context) {
ctx.RenderWithErr(ctx.Tr("form.email_been_used"), tplSettingsAccount, &form) ctx.RenderWithErr(ctx.Tr("form.email_been_used"), tplSettingsAccount, &form)
return return
} else if user_model.IsErrEmailInvalid(err) { } else if user_model.IsErrEmailCharIsNotSupported(err) ||
user_model.IsErrEmailInvalid(err) {
loadAccountData(ctx) loadAccountData(ctx)
ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tplSettingsAccount, &form) ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tplSettingsAccount, &form)

View File

@@ -30,18 +30,22 @@ const (
// SyncRequest for the mirror queue // SyncRequest for the mirror queue
type SyncRequest struct { type SyncRequest struct {
Type SyncType Type SyncType
RepoID int64 ReferenceID int64 // RepoID for pull mirror, MirrorID fro push mirror
} }
// doMirrorSync causes this request to mirror itself // doMirrorSync causes this request to mirror itself
func doMirrorSync(ctx context.Context, req *SyncRequest) { func doMirrorSync(ctx context.Context, req *SyncRequest) {
if req.ReferenceID == 0 {
log.Warn("Skipping mirror sync request, no reference ID was specified")
return
}
switch req.Type { switch req.Type {
case PushMirrorType: case PushMirrorType:
_ = SyncPushMirror(ctx, req.RepoID) _ = SyncPushMirror(ctx, req.ReferenceID)
case PullMirrorType: case PullMirrorType:
_ = SyncPullMirror(ctx, req.RepoID) _ = SyncPullMirror(ctx, req.ReferenceID)
default: default:
log.Error("Unknown Request type in queue: %v for RepoID[%d]", req.Type, req.RepoID) log.Error("Unknown Request type in queue: %v for ReferenceID[%d]", req.Type, req.ReferenceID)
} }
} }
@@ -68,7 +72,7 @@ func Update(ctx context.Context, pullLimit, pushLimit int) error {
repo = m.Repo repo = m.Repo
item = SyncRequest{ item = SyncRequest{
Type: PullMirrorType, Type: PullMirrorType,
RepoID: m.RepoID, ReferenceID: m.RepoID,
} }
} else if m, ok := bean.(*repo_model.PushMirror); ok { } else if m, ok := bean.(*repo_model.PushMirror); ok {
if m.Repo == nil { if m.Repo == nil {
@@ -78,7 +82,7 @@ func Update(ctx context.Context, pullLimit, pushLimit int) error {
repo = m.Repo repo = m.Repo
item = SyncRequest{ item = SyncRequest{
Type: PushMirrorType, Type: PushMirrorType,
RepoID: m.RepoID, ReferenceID: m.ID,
} }
} else { } else {
log.Error("Unknown bean: %v", bean) log.Error("Unknown bean: %v", bean)
@@ -163,7 +167,7 @@ func StartToMirror(repoID int64) {
go func() { go func() {
err := mirrorQueue.Push(&SyncRequest{ err := mirrorQueue.Push(&SyncRequest{
Type: PullMirrorType, Type: PullMirrorType,
RepoID: repoID, ReferenceID: repoID,
}) })
if err != nil { if err != nil {
log.Error("Unable to push sync request for to the queue for push mirror repo[%d]: Error: %v", repoID, err) log.Error("Unable to push sync request for to the queue for push mirror repo[%d]: Error: %v", repoID, err)
@@ -179,7 +183,7 @@ func AddPushMirrorToQueue(mirrorID int64) {
go func() { go func() {
err := mirrorQueue.Push(&SyncRequest{ err := mirrorQueue.Push(&SyncRequest{
Type: PushMirrorType, Type: PushMirrorType,
RepoID: mirrorID, ReferenceID: mirrorID,
}) })
if err != nil { if err != nil {
log.Error("Unable to push sync request to the queue for pull mirror repo[%d]: Error: %v", mirrorID, err) log.Error("Unable to push sync request to the queue for pull mirror repo[%d]: Error: %v", mirrorID, err)

View File

@@ -196,7 +196,7 @@ func runSync(ctx context.Context, m *repo_model.Mirror) ([]*mirrorSyncResult, bo
remoteAddr, remoteErr := git.GetRemoteAddress(ctx, repoPath, m.GetRemoteName()) remoteAddr, remoteErr := git.GetRemoteAddress(ctx, repoPath, m.GetRemoteName())
if remoteErr != nil { if remoteErr != nil {
log.Error("GetRemoteAddress Error %v", remoteErr) log.Error("SyncMirrors [repo: %-v]: GetRemoteAddress Error %v", m.Repo, remoteErr)
} }
stdoutBuilder := strings.Builder{} stdoutBuilder := strings.Builder{}
@@ -215,7 +215,7 @@ func runSync(ctx context.Context, m *repo_model.Mirror) ([]*mirrorSyncResult, bo
// Now check if the error is a resolve reference due to broken reference // Now check if the error is a resolve reference due to broken reference
if strings.Contains(stderr, "unable to resolve reference") && strings.Contains(stderr, "reference broken") { if strings.Contains(stderr, "unable to resolve reference") && strings.Contains(stderr, "reference broken") {
log.Warn("Failed to update mirror repository %-v due to broken references:\nStdout: %s\nStderr: %s\nErr: %v\nAttempting Prune", m.Repo, stdoutMessage, stderrMessage, err) log.Warn("SyncMirrors [repo: %-v]: failed to update mirror repository due to broken references:\nStdout: %s\nStderr: %s\nErr: %v\nAttempting Prune", m.Repo, stdoutMessage, stderrMessage, err)
err = nil err = nil
// Attempt prune // Attempt prune
@@ -240,7 +240,7 @@ func runSync(ctx context.Context, m *repo_model.Mirror) ([]*mirrorSyncResult, bo
// If there is still an error (or there always was an error) // If there is still an error (or there always was an error)
if err != nil { if err != nil {
log.Error("Failed to update mirror repository %-v:\nStdout: %s\nStderr: %s\nErr: %v", m.Repo, stdoutMessage, stderrMessage, err) log.Error("SyncMirrors [repo: %-v]: failed to update mirror repository:\nStdout: %s\nStderr: %s\nErr: %v", m.Repo, stdoutMessage, stderrMessage, err)
desc := fmt.Sprintf("Failed to update mirror repository '%s': %s", repoPath, stderrMessage) desc := fmt.Sprintf("Failed to update mirror repository '%s': %s", repoPath, stderrMessage)
if err = admin_model.CreateRepositoryNotice(desc); err != nil { if err = admin_model.CreateRepositoryNotice(desc); err != nil {
log.Error("CreateRepositoryNotice: %v", err) log.Error("CreateRepositoryNotice: %v", err)
@@ -252,13 +252,13 @@ func runSync(ctx context.Context, m *repo_model.Mirror) ([]*mirrorSyncResult, bo
gitRepo, err := git.OpenRepository(repoPath) gitRepo, err := git.OpenRepository(repoPath)
if err != nil { if err != nil {
log.Error("OpenRepository: %v", err) log.Error("SyncMirrors [repo: %-v]: failed to OpenRepository: %v", m.Repo, err)
return nil, false return nil, false
} }
log.Trace("SyncMirrors [repo: %-v]: syncing releases with tags...", m.Repo) log.Trace("SyncMirrors [repo: %-v]: syncing releases with tags...", m.Repo)
if err = repo_module.SyncReleasesWithTags(m.Repo, gitRepo); err != nil { if err = repo_module.SyncReleasesWithTags(m.Repo, gitRepo); err != nil {
log.Error("Failed to synchronize tags to releases for repository: %v", err) log.Error("SyncMirrors [repo: %-v]: failed to synchronize tags to releases: %v", m.Repo, err)
} }
if m.LFS && setting.LFS.StartServer { if m.LFS && setting.LFS.StartServer {
@@ -266,14 +266,14 @@ func runSync(ctx context.Context, m *repo_model.Mirror) ([]*mirrorSyncResult, bo
endpoint := lfs.DetermineEndpoint(remoteAddr.String(), m.LFSEndpoint) endpoint := lfs.DetermineEndpoint(remoteAddr.String(), m.LFSEndpoint)
lfsClient := lfs.NewClient(endpoint, nil) lfsClient := lfs.NewClient(endpoint, nil)
if err = repo_module.StoreMissingLfsObjectsInRepository(ctx, m.Repo, gitRepo, lfsClient); err != nil { if err = repo_module.StoreMissingLfsObjectsInRepository(ctx, m.Repo, gitRepo, lfsClient); err != nil {
log.Error("Failed to synchronize LFS objects for repository: %v", err) log.Error("SyncMirrors [repo: %-v]: failed to synchronize LFS objects for repository: %v", m.Repo, err)
} }
} }
gitRepo.Close() gitRepo.Close()
log.Trace("SyncMirrors [repo: %-v]: updating size of repository", m.Repo) log.Trace("SyncMirrors [repo: %-v]: updating size of repository", m.Repo)
if err := models.UpdateRepoSize(db.DefaultContext, m.Repo); err != nil { if err := models.UpdateRepoSize(db.DefaultContext, m.Repo); err != nil {
log.Error("Failed to update size for mirror repository: %v", err) log.Error("SyncMirrors [repo: %-v]: failed to update size for mirror repository: %v", m.Repo, err)
} }
if m.Repo.HasWiki() { if m.Repo.HasWiki() {
@@ -291,7 +291,7 @@ func runSync(ctx context.Context, m *repo_model.Mirror) ([]*mirrorSyncResult, bo
remoteAddr, remoteErr := git.GetRemoteAddress(ctx, wikiPath, m.GetRemoteName()) remoteAddr, remoteErr := git.GetRemoteAddress(ctx, wikiPath, m.GetRemoteName())
if remoteErr != nil { if remoteErr != nil {
log.Error("GetRemoteAddress Error %v", remoteErr) log.Error("SyncMirrors [repo: %-v Wiki]: unable to get GetRemoteAddress Error %v", m.Repo, remoteErr)
} }
// sanitize the output, since it may contain the remote address, which may // sanitize the output, since it may contain the remote address, which may
@@ -302,7 +302,7 @@ func runSync(ctx context.Context, m *repo_model.Mirror) ([]*mirrorSyncResult, bo
// Now check if the error is a resolve reference due to broken reference // Now check if the error is a resolve reference due to broken reference
if strings.Contains(stderrMessage, "unable to resolve reference") && strings.Contains(stderrMessage, "reference broken") { if strings.Contains(stderrMessage, "unable to resolve reference") && strings.Contains(stderrMessage, "reference broken") {
log.Warn("Failed to update mirror wiki repository %-v due to broken references:\nStdout: %s\nStderr: %s\nErr: %v\nAttempting Prune", m.Repo, stdoutMessage, stderrMessage, err) log.Warn("SyncMirrors [repo: %-v Wiki]: failed to update mirror wiki repository due to broken references:\nStdout: %s\nStderr: %s\nErr: %v\nAttempting Prune", m.Repo, stdoutMessage, stderrMessage, err)
err = nil err = nil
// Attempt prune // Attempt prune
@@ -325,7 +325,7 @@ func runSync(ctx context.Context, m *repo_model.Mirror) ([]*mirrorSyncResult, bo
// If there is still an error (or there always was an error) // If there is still an error (or there always was an error)
if err != nil { if err != nil {
log.Error("Failed to update mirror repository wiki %-v:\nStdout: %s\nStderr: %s\nErr: %v", m.Repo, stdoutMessage, stderrMessage, err) log.Error("SyncMirrors [repo: %-v Wiki]: failed to update mirror repository wiki:\nStdout: %s\nStderr: %s\nErr: %v", m.Repo, stdoutMessage, stderrMessage, err)
desc := fmt.Sprintf("Failed to update mirror repository wiki '%s': %s", wikiPath, stderrMessage) desc := fmt.Sprintf("Failed to update mirror repository wiki '%s': %s", wikiPath, stderrMessage)
if err = admin_model.CreateRepositoryNotice(desc); err != nil { if err = admin_model.CreateRepositoryNotice(desc); err != nil {
log.Error("CreateRepositoryNotice: %v", err) log.Error("CreateRepositoryNotice: %v", err)
@@ -339,7 +339,7 @@ func runSync(ctx context.Context, m *repo_model.Mirror) ([]*mirrorSyncResult, bo
log.Trace("SyncMirrors [repo: %-v]: invalidating mirror branch caches...", m.Repo) log.Trace("SyncMirrors [repo: %-v]: invalidating mirror branch caches...", m.Repo)
branches, _, err := git.GetBranchesByPath(m.Repo.RepoPath(), 0, 0) branches, _, err := git.GetBranchesByPath(m.Repo.RepoPath(), 0, 0)
if err != nil { if err != nil {
log.Error("GetBranches: %v", err) log.Error("SyncMirrors [repo: %-v]: failed to GetBranches: %v", m.Repo, err)
return nil, false return nil, false
} }
@@ -360,12 +360,12 @@ func SyncPullMirror(ctx context.Context, repoID int64) bool {
return return
} }
// There was a panic whilst syncMirrors... // There was a panic whilst syncMirrors...
log.Error("PANIC whilst syncMirrors[%d] Panic: %v\nStacktrace: %s", repoID, err, log.Stack(2)) log.Error("PANIC whilst SyncMirrors[repo_id: %d] Panic: %v\nStacktrace: %s", repoID, err, log.Stack(2))
}() }()
m, err := repo_model.GetMirrorByRepoID(repoID) m, err := repo_model.GetMirrorByRepoID(repoID)
if err != nil { if err != nil {
log.Error("GetMirrorByRepoID [%d]: %v", repoID, err) log.Error("SyncMirrors [repo_id: %v]: unable to GetMirrorByRepoID: %v", repoID, err)
return false return false
} }
@@ -381,7 +381,7 @@ func SyncPullMirror(ctx context.Context, repoID int64) bool {
log.Trace("SyncMirrors [repo: %-v]: Scheduling next update", m.Repo) log.Trace("SyncMirrors [repo: %-v]: Scheduling next update", m.Repo)
m.ScheduleNextUpdate() m.ScheduleNextUpdate()
if err = repo_model.UpdateMirror(m); err != nil { if err = repo_model.UpdateMirror(m); err != nil {
log.Error("UpdateMirror [%d]: %v", m.RepoID, err) log.Error("SyncMirrors [repo: %-v]: failed to UpdateMirror with next update date: %v", m.Repo, err)
return false return false
} }
@@ -392,7 +392,7 @@ func SyncPullMirror(ctx context.Context, repoID int64) bool {
log.Trace("SyncMirrors [repo: %-v]: %d branches updated", m.Repo, len(results)) log.Trace("SyncMirrors [repo: %-v]: %d branches updated", m.Repo, len(results))
gitRepo, err = git.OpenRepositoryCtx(ctx, m.Repo.RepoPath()) gitRepo, err = git.OpenRepositoryCtx(ctx, m.Repo.RepoPath())
if err != nil { if err != nil {
log.Error("OpenRepository [%d]: %v", m.RepoID, err) log.Error("SyncMirrors [repo: %-v]: unable to OpenRepository: %v", m.Repo, err)
return false return false
} }
defer gitRepo.Close() defer gitRepo.Close()
@@ -419,7 +419,7 @@ func SyncPullMirror(ctx context.Context, repoID int64) bool {
} }
commitID, err := gitRepo.GetRefCommitID(result.refName) commitID, err := gitRepo.GetRefCommitID(result.refName)
if err != nil { if err != nil {
log.Error("gitRepo.GetRefCommitID [repo_id: %d, ref_name: %s]: %v", m.RepoID, result.refName, err) log.Error("SyncMirrors [repo: %-v]: unable to GetRefCommitID [ref_name: %s]: %v", m.Repo, result.refName, err)
continue continue
} }
notification.NotifySyncPushCommits(m.Repo.MustOwner(), m.Repo, &repo_module.PushUpdateOptions{ notification.NotifySyncPushCommits(m.Repo.MustOwner(), m.Repo, &repo_module.PushUpdateOptions{
@@ -440,17 +440,17 @@ func SyncPullMirror(ctx context.Context, repoID int64) bool {
// Push commits // Push commits
oldCommitID, err := git.GetFullCommitID(gitRepo.Path, result.oldCommitID) oldCommitID, err := git.GetFullCommitID(gitRepo.Path, result.oldCommitID)
if err != nil { if err != nil {
log.Error("GetFullCommitID [%d]: %v", m.RepoID, err) log.Error("SyncMirrors [repo: %-v]: unable to get GetFullCommitID[%s]: %v", m.Repo, result.oldCommitID, err)
continue continue
} }
newCommitID, err := git.GetFullCommitID(gitRepo.Path, result.newCommitID) newCommitID, err := git.GetFullCommitID(gitRepo.Path, result.newCommitID)
if err != nil { if err != nil {
log.Error("GetFullCommitID [%d]: %v", m.RepoID, err) log.Error("SyncMirrors [repo: %-v]: unable to get GetFullCommitID [%s]: %v", m.Repo, result.newCommitID, err)
continue continue
} }
commits, err := gitRepo.CommitsBetweenIDs(newCommitID, oldCommitID) commits, err := gitRepo.CommitsBetweenIDs(newCommitID, oldCommitID)
if err != nil { if err != nil {
log.Error("CommitsBetweenIDs [repo_id: %d, new_commit_id: %s, old_commit_id: %s]: %v", m.RepoID, newCommitID, oldCommitID, err) log.Error("SyncMirrors [repo: %-v]: unable to get CommitsBetweenIDs [new_commit_id: %s, old_commit_id: %s]: %v", m.Repo, newCommitID, oldCommitID, err)
continue continue
} }
@@ -472,12 +472,12 @@ func SyncPullMirror(ctx context.Context, repoID int64) bool {
// Get latest commit date and update to current repository updated time // Get latest commit date and update to current repository updated time
commitDate, err := git.GetLatestCommitTime(m.Repo.RepoPath()) commitDate, err := git.GetLatestCommitTime(m.Repo.RepoPath())
if err != nil { if err != nil {
log.Error("GetLatestCommitDate [%d]: %v", m.RepoID, err) log.Error("SyncMirrors [repo: %-v]: unable to GetLatestCommitDate: %v", m.Repo, err)
return false return false
} }
if err = repo_model.UpdateRepositoryUpdatedTime(m.RepoID, commitDate); err != nil { if err = repo_model.UpdateRepositoryUpdatedTime(m.RepoID, commitDate); err != nil {
log.Error("Update repository 'updated_unix' [%d]: %v", m.RepoID, err) log.Error("SyncMirrors [repo: %-v]: unable to update repository 'updated_unix': %v", m.Repo, err)
return false return false
} }

View File

@@ -14,7 +14,7 @@
<div class="inline required field {{if .Err_Type}}error{{end}}"> <div class="inline required field {{if .Err_Type}}error{{end}}">
<label>{{.i18n.Tr "admin.auths.auth_type"}}</label> <label>{{.i18n.Tr "admin.auths.auth_type"}}</label>
<div class="ui selection type dropdown"> <div class="ui selection type dropdown">
<input type="hidden" id="auth_type" name="type" value="{{.type.Int}}"> <input type="hidden" id="auth_type" name="type" value="{{.type}}">
<div class="text">{{.CurrentTypeName}}</div> <div class="text">{{.CurrentTypeName}}</div>
{{svg "octicon-triangle-down" 14 "dropdown icon"}} {{svg "octicon-triangle-down" 14 "dropdown icon"}}
<div class="menu"> <div class="menu">

View File

@@ -2,8 +2,8 @@
<div class="inline required field"> <div class="inline required field">
<label>{{.i18n.Tr "admin.auths.oauth2_provider"}}</label> <label>{{.i18n.Tr "admin.auths.oauth2_provider"}}</label>
<div class="ui selection type dropdown"> <div class="ui selection type dropdown">
<input type="hidden" id="oauth2_provider" name="oauth2_provider" value="{{.oauth2_provider.Name}}"> <input type="hidden" id="oauth2_provider" name="oauth2_provider" value="{{.oauth2_provider}}">
<div class="text">{{.oauth2_provider.Name}}</div> <div class="text">{{.oauth2_provider}}</div>
{{svg "octicon-triangle-down" 14 "dropdown icon"}} {{svg "octicon-triangle-down" 14 "dropdown icon"}}
<div class="menu"> <div class="menu">
{{range .OAuth2Providers}} {{range .OAuth2Providers}}

View File

@@ -5,7 +5,7 @@
{{template "base/alert" .}} {{template "base/alert" .}}
{{if .NeedUpdate}} {{if .NeedUpdate}}
<div class="ui negative message flash-error"> <div class="ui negative message flash-error">
<p>{{.i18n.Tr "admin.dashboard.new_version_hint" (.RemoteVersion | Str2html) (AppVer | Str2html)}}</p> <p>{{(.i18n.Tr "admin.dashboard.new_version_hint" .RemoteVersion AppVer) | Str2html}}</p>
</div> </div>
{{end}} {{end}}
<h4 class="ui top attached header"> <h4 class="ui top attached header">

View File

@@ -125,7 +125,7 @@
<div class="column"> <div class="column">
{{if $.Permission.CanRead $.UnitTypeIssues}} {{if $.Permission.CanRead $.UnitTypeIssues}}
<div class="ui link list"> <div class="ui link list">
<a class="item ref-in-new-issue" href="{{.RepoLink}}/issues/new?body={{.Repository.HTMLURL}}{{printf "/src/commit/" }}{{PathEscape .CommitID}}/{{PathEscapeSegments .TreePath}}">{{.i18n.Tr "repo.issues.context.reference_issue"}}</a> <a class="item ref-in-new-issue" href="{{.RepoLink}}/issues/new?body={{.Repository.HTMLURL}}{{printf "/src/commit/" }}{{PathEscape .CommitID}}/{{PathEscapeSegments .TreePath}}" rel="nofollow noindex">{{.i18n.Tr "repo.issues.context.reference_issue"}}</a>
</div> </div>
{{end}} {{end}}
<div class="ui link list"> <div class="ui link list">