# Enable support of raw HTML

Hugo disables raw HTML element rendering by default, so we need to enable it.
## Enable for an entire site

Add the following in the config file:
```toml
[markup.goldmark.renderer]
  unsafe = true
```

## Enable for a single page

Create `layouts/shortcodes/unsafe.html` with the following content:

```html
{{ .Inner }}
```
In the markdown page, use it like this:
```html
{{< unsafe >}}
<ul>
  <li>First item</li>
  <li>Second item</li>
  <li>Third item</li>
</ul>
{{< /unsafe >}}
```

The above will be rendered like this:

- First item
- Second item
- Third item

## Reference

- How To Escape Shortcode In Hugo Template
- Configure Markup
- Create Your Own Shortcodes
- Enable unsafe=true for a single page
Create an S3 bucket with the name tw-testbucket-2021abc, then attach the bucket policy below; it can be generated with the help of the AWS Policy Generator.

```json
{
  "Id": "Policy1627604722484",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1627604720916",
      "Action": ["s3:PutObject"],
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::tw-testbucket-2021abc/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      },
      "Principal": "*"
    }
  ]
}
```

## Upload a file without encryption enabled

The upload will fail with an Access Denied error.

## Upload a file with encryption enabled
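A minimal sketch of both uploads with the AWS CLI (the file name `test.txt` is a placeholder):

```bash
# Upload without encryption: denied by the bucket policy
aws s3 cp test.txt s3://tw-testbucket-2021abc/

# Upload with SSE-KMS encryption: allowed
aws s3 cp test.txt s3://tw-testbucket-2021abc/ --sse aws:kms
```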
In this exercise, I will try a few ways to access resources in Account A as a user in Account B.
## Setup

- Have two accounts ready, Account A and Account B
- Go to the Account A console
- Create a role with "Another AWS account" as the type of trusted entity, with the role name crossaccountrole
- Attach policies, for example AmazonS3FullAccess

## Access Account A resources from Account B

### Access from the console

- Go to the Account B console
- Click Switch Role
- Provide the account ID of Account A and the role name crossaccountrole created earlier
- Now we should be able to access S3 in Account A

Or use the switch role link https://signin.aws.amazon.com/switchrole directly.
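The same switch can be done from the CLI with STS; a minimal sketch, assuming Account B credentials are configured and `111111111111` is a placeholder for Account A's ID:

```bash
# Assume the cross-account role in Account A (account ID is a placeholder)
aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/crossaccountrole \
  --role-session-name cross-account-test

# Export the temporary credentials returned above, then access Account A resources
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...
aws s3 ls
```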
Recently I tested migrating a WordPress site to Hugo and hosting it as an AWS S3 static website. In this exercise, I will use Route 53 for the DNS service and CloudFront for content delivery. Before starting, a working S3 static website should be ready; please refer to How to Create Site With Hugo to see how to set up a Hugo site in S3.
## Request a certificate in AWS Certificate Manager

Add the domain names for the site, for example the apex domain and a wildcard like `*.example.com`.
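The same request can be made from the CLI; a sketch using the placeholder domain `example.com` (note that CloudFront only accepts certificates from us-east-1):

```bash
# CloudFront requires the ACM certificate to live in us-east-1
aws acm request-certificate \
  --region us-east-1 \
  --domain-name example.com \
  --subject-alternative-names '*.example.com' \
  --validation-method DNS
```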
# Install Hugo on CentOS 7

## Add the Copr repo

The Hugo package can be found at https://copr.fedorainfracloud.org/coprs/daftaupe/hugo/; take the correct version and place it in /etc/yum.repos.d/hugo.repo.
```bash
vim /etc/yum.repos.d/hugo.repo
# add the content below to the file
```

```ini
[copr:copr.fedorainfracloud.org:daftaupe:hugo]
name=Copr repo for hugo owned by daftaupe
baseurl=https://download.copr.fedorainfracloud.org/results/daftaupe/hugo/epel-7-$basearch/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://download.copr.fedorainfracloud.org/results/daftaupe/hugo/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1
```

## Install

```bash
yum -y install hugo
hugo version
```

## Quick start

```bash
# Create a new site
hugo new site mysite

# Add a theme, for example the Ananke theme used in the Hugo quick start
cd mysite
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke themes/ananke
```
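The quick start can be finished by enabling the theme and serving the site locally; a minimal sketch, assuming the Ananke theme added above:

```bash
# Enable the theme in the site config and add a first post
echo 'theme = "ananke"' >> config.toml
hugo new posts/my-first-post.md

# Serve the site locally, including drafts
hugo server -D
```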
There are two different sets of APIs for managing Atlassian users, so there are different ways to deactivate a user.
## Deactivate a user from the Admin portal

- Go to https://admin.atlassian.com/o/{org_id}/members
- Go to Directory -> Managed accounts
- Click the user
- Click the Deactivate account button

## Deactivate a user with the User Management API

```bash
curl --request POST \
  --url 'https://api.atlassian.com/users/{account_id}/manage/lifecycle/disable' \
  --header 'Authorization: Bearer <access_token>' \
  --header 'Content-Type: application/json'
```

## Deactivate a user with the User Provisioning API

If a user was provisioned by the User Provisioning API, the previous step will not work; the user has to be deactivated with the User Provisioning API instead. For example, say we need to deactivate the user test@example.com.
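A sketch of that deactivation, assuming the SCIM-style endpoints of the User Provisioning API; `{directory_id}`, `{user_id}`, and the API key are placeholders, and the user id is looked up first by userName:

```bash
# Look up the SCIM user id for test@example.com
curl --get \
  --url 'https://api.atlassian.com/scim/directory/{directory_id}/Users' \
  --data-urlencode 'filter=userName eq "test@example.com"' \
  --header 'Authorization: Bearer <api_key>'

# Deactivate the user by setting active=false
curl --request PATCH \
  --url 'https://api.atlassian.com/scim/directory/{directory_id}/Users/{user_id}' \
  --header 'Authorization: Bearer <api_key>' \
  --header 'Content-Type: application/json' \
  --data '{
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [{"op": "replace", "value": {"active": false}}]
  }'
```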
In this exercise, we will try to set up Git and use SSH to connect to the repo.
- Download and install Git
- Open a Git Bash window
- Generate the keys:

```bash
ssh-keygen # this will generate a private key (id_rsa) and a public key (id_rsa.pub)
```

- Add the content of the public key (id_rsa.pub) to the Git profile settings
- Run git clone:

```bash
git clone ssh://git@git_repo_url:7999/test.git
```
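Optionally, a host entry in ~/.ssh/config saves retyping the user and port on every clone; a minimal sketch using the same placeholder host `git_repo_url`:

```bash
# Append a host entry (file path and key name are the ssh defaults)
cat >> ~/.ssh/config <<'EOF'
Host git_repo_url
    User git
    Port 7999
    IdentityFile ~/.ssh/id_rsa
EOF

# The user and port can now be omitted from the clone URL
git clone ssh://git_repo_url/test.git
```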
In this exercise, we will try to mount S3 to the local file system with the s3fs package.
## Mount the S3 bucket to a local directory

```bash
aws configure
aws s3 mb s3://testbucket
sudo cp -r ~/.aws /root

# install s3fs
sudo yum install s3fs-fuse -y

sudo mkdir /mnt/s3data
sudo s3fs {BUCKET_name} /mnt/s3data -o allow_other -o default_acl=public-read -o use_cache=/tmp/s3fs

echo "<html><h1>test</h1></html>" > /mnt/s3data/index.html

# check the file in the local folder and in S3
ll /mnt/s3data
aws s3 ls s3://{BUCKET_name}
```

## Access the S3 bucket data from a Docker container

Once mounted, we can access the data in a container as well, as shown below.
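A minimal sketch of reading the mount from a container; the nginx image and host port are assumptions:

```bash
# Serve the mounted bucket content with nginx, bind-mounted read-only
docker run --rm -d -p 8080:80 \
  -v /mnt/s3data:/usr/share/nginx/html:ro \
  nginx

# The index.html written above is now served out of the bucket
curl http://localhost:8080/index.html
```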
Sometimes we have a requirement to sync one Git repo to another location whenever there is a change in the primary repo.
## Sync the repo from primary to secondary

```bash
git clone --mirror https://primary_repo_url/primary.git
cd primary.git
git remote add --mirror=fetch secondary https://secondary_repo_url/secondary.git
git fetch origin
git push secondary --all
git push secondary --tags
```

## Sync it automatically

To achieve this, we can set up a job in Jenkins or TeamCity to monitor for new commits in the primary repo, and then run a script like the one below.
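A minimal sketch of such a script, assuming the mirror clone created above lives at the placeholder path /opt/mirrors/primary.git:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Path to the existing mirror clone (placeholder)
cd /opt/mirrors/primary.git

# Pull the latest refs from the primary, then push everything to the secondary
git fetch origin --prune
git push secondary --all
git push secondary --tags
```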
Sometimes an error occurs ("Failed to execute 'setItem' on 'Storage': Setting the value of 'xxxxx' exceeded the quota") when adding data to local storage because the storage is full, but it is not obvious which key uses more space and how much it is. I found the code below, which is very helpful.
## One-line version

```js
var _lsTotal=0,_xLen,_x;for(_x in localStorage){ if(!localStorage.hasOwnProperty(_x)){continue;} _xLen= ((localStorage[_x].length + _x.length)* 2);_lsTotal+=_xLen; console.log(_x.substr(0,50)+" = "+ (_xLen/1024).toFixed(2)+" KB")};console.log("Total = " + (_lsTotal / 1024).toFixed(2) + " KB");
```

It iterates over every localStorage key, counting two bytes per UTF-16 character for both key and value, and prints the size of each entry and the total in KB.