8 changes: 8 additions & 0 deletions .devcontainer/devcontainer.json
@@ -0,0 +1,8 @@
{
"name": "Azure Codespace",
"features": {
"ghcr.io/devcontainers/features/azure-cli:1": {},
"ghcr.io/devcontainers/features/terraform:1": {},
"ghcr.io/devcontainers/features/aws-cli:1": {}
}
}
5 changes: 5 additions & 0 deletions .gitignore
@@ -12,3 +12,8 @@ __pycache__/

# SEO planning document
SEO.md

# Add this line to your .gitignore
.terraform/
*.tfstate
*.tfstate.backup
123 changes: 123 additions & 0 deletions azure/README.md
@@ -0,0 +1,123 @@
## Preparation

### Set up Azure CLI and Terraform in GitHub Codespaces
Step 1: Create `.devcontainer/devcontainer.json`
Step 2: Add the following content to the JSON file
```json
{
"name": "Azure Codespace",
"features": {
"ghcr.io/devcontainers/features/azure-cli:1": {},
"ghcr.io/devcontainers/features/terraform:1": {},
"ghcr.io/devcontainers/features/aws-cli:1": {}
}
}
```
Step 3: After editing `devcontainer.json`
- Push the changes to GitHub
- In the Codespace, press Ctrl + Shift + P, type `Codespaces: Rebuild Container`, and press Enter
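A stray comma or bracket in `devcontainer.json` will make the rebuild fail, so it can help to validate the JSON locally before pushing. A minimal sketch using only Python's standard library (the inline string below simply mirrors the snippet above):

```shell
# Validate the devcontainer JSON before committing; python3 -m json.tool
# exits non-zero on a parse error, so "valid JSON" is echoed only on success
DEVCONTAINER_JSON='{
  "name": "Azure Codespace",
  "features": {
    "ghcr.io/devcontainers/features/azure-cli:1": {},
    "ghcr.io/devcontainers/features/terraform:1": {},
    "ghcr.io/devcontainers/features/aws-cli:1": {}
  }
}'
echo "${DEVCONTAINER_JSON}" | python3 -m json.tool > /dev/null && echo "valid JSON"
```

In a real checkout you would check the file itself, e.g. `python3 -m json.tool < .devcontainer/devcontainer.json`.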

### Set up Azure login in GitHub Codespaces
Step 1: Run the following command in the Codespace terminal
```bash
az login --use-device-code
```
Step 2: After logging in, list the available subscriptions so you can pick the correct one
```bash
az account list --output table
```
Step 3: Set the correct subscription explicitly
```bash
az account set --subscription <subscriptionid>
```
- Verify the subscription has been set up correctly
```bash
az account show --query "{name:name, id:id, user:user.name}"

# or, if SUBSCRIPTION_ID has already been exported:
echo "SUBSCRIPTION_ID=$SUBSCRIPTION_ID"
```
Step 4: Register the resource provider if it is not already registered.
- ✅ Check provider status
```bash
az provider show \
--namespace Microsoft.Storage \
--query "registrationState"

```
- ❌ If it says: NotRegistered
- 🔧 Fix (this is safe)
```bash
az provider register --namespace Microsoft.Storage
```
- Wait 1–2 minutes, then confirm
```bash
az provider show --namespace Microsoft.Storage --query "registrationState"
```
- It must say: Registered
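Instead of re-running the status command by hand during the 1–2 minute wait, the check can be looped. This is a sketch: `provider_state` is a hypothetical stand-in for the real `az provider show` call so the loop shape can be tried without an Azure login; swap in the real command in practice.

```shell
# Poll until the provider reports Registered.
# provider_state is a stub; replace its body with the real call:
#   az provider show --namespace Microsoft.Storage --query registrationState -o tsv
provider_state() { echo "Registered"; }

until [ "$(provider_state)" = "Registered" ]; do
  echo "Waiting for Microsoft.Storage registration..."
  sleep 15
done
echo "✅ Microsoft.Storage is Registered"
```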

### Enable App Service for the Azure subscription

Step 1: Register Microsoft.Web
- Run this once per subscription:
```bash
az provider register --namespace Microsoft.Web
```
Step 2: Wait for registration to complete
- Check status:

```bash
az provider show \
--namespace Microsoft.Web \
--query "registrationState" \
--output tsv
```
- You should see: Registered

### Log in again after a Codespaces container rebuild
```bash
az login
az account list -o table
az account set --subscription <SUB_ID>
```

### Generate unique suffix for resource naming
```bash
RANDOM_SUFFIX=$(openssl rand -hex 3)
```
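`openssl rand -hex 3` emits 3 random bytes encoded as 6 lowercase hex characters, short enough to keep the generated names within Azure's length limits:

```shell
# 3 random bytes -> 6 hex characters (0-9, a-f)
RANDOM_SUFFIX=$(openssl rand -hex 3)
echo "suffix: ${RANDOM_SUFFIX} (length ${#RANDOM_SUFFIX})"
```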
### Set environment variables for Azure resources
```bash
export SUBSCRIPTION_ID=$(az account show --query id --output tsv)
export RESOURCE_GROUP="rg-storage-demo-${RANDOM_SUFFIX}"
export LOCATION="eastus"
```

### Set a unique storage account name (create the account later if the project needs one)
```bash
export STORAGE_ACCOUNT="sa$(openssl rand -hex 6)"
```
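Storage account names must be globally unique, 3–24 characters, and lowercase letters and digits only; `sa` plus 12 hex characters from `openssl rand -hex 6` stays well inside those rules. A quick local check of a generated name:

```shell
# Generate a name and verify it against Azure's storage-account naming rules
STORAGE_ACCOUNT="sa$(openssl rand -hex 6)"
case "${STORAGE_ACCOUNT}" in
  *[!a-z0-9]*) echo "invalid: ${STORAGE_ACCOUNT}" ;;
  *)           echo "valid: ${STORAGE_ACCOUNT} (${#STORAGE_ACCOUNT} chars)" ;;
esac
```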

### Create resource group for storage resources
```bash
az group create \
--name ${RESOURCE_GROUP} \
--location ${LOCATION} \
--tags purpose=demo environment=learning

echo "✅ Resource group created: ${RESOURCE_GROUP}"
echo "✅ Storage account name: ${STORAGE_ACCOUNT}"
```


### Projects studied but not yet run

#### Container
- simple-web-container-aci-registry
- simple-container-deployment-container-apps
- secure-serverless-microservices
- serverless-containers-event-grid-aci (review Event Grid concepts first)


#### Event-Grid
- simple-event-notifications-event-grid-functions
Empty file.
@@ -148,7 +148,7 @@ echo "✅ Resource group created: ${RESOURCE_GROUP}"

```bash
# Create a simple table structure for demonstration
az sql query \
az sql db query \
--server ${SQL_SERVER_NAME} \
--database ${SQL_DATABASE_NAME} \
--auth-type SqlPassword \
@@ -183,7 +183,7 @@ echo "✅ Resource group created: ${RESOURCE_GROUP}"
--name ${APP_SERVICE_PLAN} \
--resource-group ${RESOURCE_GROUP} \
--location ${LOCATION} \
--sku B1 \
--sku B1 \
--is-linux false

echo "✅ App Service Plan created: ${APP_SERVICE_PLAN}"
@@ -192,7 +192,7 @@ cd terraform/

# Destroy all resources
terraform destroy -var="sql_admin_password=SecurePass123!" \
-var="location=eastus"
-var="location=westus"
```

### Using Bash Scripts
8 changes: 4 additions & 4 deletions azure/basic-file-storage-blob-portal/code/README.md
@@ -62,13 +62,13 @@ terraform init
terraform plan \
-var="resource_group_name=rg-storage-demo" \
-var="location=eastus" \
-var="storage_account_prefix=mystorageacct"
-var="storage_account_name=mystorageacct"

# Apply the configuration
terraform apply \
-var="resource_group_name=rg-storage-demo" \
-var="location=eastus" \
-var="storage_account_prefix=mystorageacct"
-var="storage_account_name=mystorageacct"

# View outputs
terraform output
@@ -111,7 +111,7 @@ Customize your deployment by modifying variables in `terraform/variables.tf` or
terraform apply \
-var="resource_group_name=my-rg" \
-var="location=westus2" \
-var="storage_account_prefix=mycompany" \
-var="storage_account_name=mycompany" \
-var="sku_name=Standard_GRS" \
-var="access_tier=Cool"
```
@@ -206,7 +206,7 @@ cd terraform/
terraform destroy \
-var="resource_group_name=rg-storage-demo" \
-var="location=eastus" \
-var="storage_account_prefix=mystorageacct"
-var="storage_account_name=mystorageacct"
```

### Using Bash Scripts
@@ -0,0 +1 @@
{"app": "demo", "environment": "test", "version": "1.0"}
@@ -202,7 +202,7 @@ resource "azurerm_role_assignment" "current_user_blob_contributor" {
principal_id = data.azurerm_client_config.current.object_id

# Specify principal type for ABAC compatibility
principal_type = "ServicePrincipal"
principal_type = "User"

# Add descriptive information for audit purposes
description = "Storage Blob Data Contributor access for Terraform deployment user"
@@ -0,0 +1 @@
This is a sample document for testing blob storage capabilities.
@@ -28,7 +28,7 @@ variable "location" {

validation {
condition = contains([
"East US", "East US 2", "West US", "West US 2", "West US 3",
"East US", "East US 2", "West US", "West US 2", "West US 3", "eastus",
"Central US", "North Central US", "South Central US", "West Central US",
"Canada Central", "Canada East",
"Brazil South",
@@ -30,8 +30,8 @@ provider "azurerm" {
# Storage provider features for enhanced blob storage management
storage {
# Prevent accidental deletion of storage containers and blobs
purge_soft_delete_on_destroy = false
recover_soft_deleted_key_vaults = false
# purge_soft_delete_on_destroy = false
# recover_soft_deleted_key_vaults = false
}

# Resource group provider features
@@ -40,7 +40,7 @@ provider "azurerm" {
prevent_deletion_if_contains_resources = false
}
}

subscription_id = "333e3c5a-b7cf-41ff-9a88-f367c6754474"
# Use Azure CLI authentication by default
# Alternatively, service principal authentication can be configured
# using environment variables or configuration blocks
@@ -70,16 +70,19 @@ graph TB

> **Note**: The Consumption plan includes a monthly free grant of 1 million executions and 400,000 GB-seconds, making this recipe extremely cost-effective for learning and testing scenarios.

## Preparation


### Set environment variables for Azure resources
```bash
# Set environment variables for Azure resources
# Generate unique suffix for resource names
RANDOM_SUFFIX=$(openssl rand -hex 3)

# Set environment variables
export RESOURCE_GROUP="rg-file-processing-${RANDOM_SUFFIX}"
export LOCATION="eastus"
export SUBSCRIPTION_ID=$(az account show --query id --output tsv)

# Set resource names with unique suffix
export STORAGE_ACCOUNT="stfileproc${RANDOM_SUFFIX}"
@@ -157,7 +160,7 @@ echo "✅ Resource group created: ${RESOURCE_GROUP}"
--storage-account ${STORAGE_ACCOUNT} \
--consumption-plan-location ${LOCATION} \
--runtime node \
--runtime-version 18 \
--runtime-version 24 \
--functions-version 4 \
--tags purpose=file-processing environment=demo

@@ -196,23 +199,27 @@ echo "✅ Resource group created: ${RESOURCE_GROUP}"
# Create directory for function code
mkdir -p function-code
cd function-code

```

6. **Create function.json configuration for blob trigger**
```bash
cat > function.json << 'EOF'
{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "uploads/{name}",
      "connection": "STORAGE_CONNECTION_STRING"
    }
  ]
}
EOF

```

7. **Create JavaScript function code for file processing**
```bash
cat > index.js << 'EOF'
module.exports = async function (context, myBlob) {
const fileName = context.bindingData.name;
@@ -238,12 +245,12 @@ module.exports = async function (context, myBlob) {
context.log(`🎉 File processing workflow completed for: ${fileName}`);
};
EOF

echo "✅ Function code created successfully"
cd ..
```


8. **Deploy Function to Azure**:

Function deployment transfers the trigger configuration and processing logic to Azure's serverless infrastructure. Once deployed, the function automatically monitors the specified blob container and executes the processing workflow whenever new files are uploaded.

@@ -0,0 +1,11 @@
{
"bindings": [
{
"name": "myBlob",
"type": "blobTrigger",
"direction": "in",
"path": "uploads/{name}",
"connection": "STORAGE_CONNECTION_STRING"
}
]
}