
Python basics notes

  • os
  • Path
  • str
  • yaml
  • glob
  • @contextmanager
  • zip
  • numpy
  • pytorch

path = "D:/"

os
# split a filename into base name and extension, e.g. "file.txt" -> "file" is the base name, ".txt" the extension

1.os.path.splitext("123.jpg")

[out]: ("123", ".jpg")
# get the size of a file in bytes

2.os.path.getsize(path)

if os.path.isfile(path):
	print(os.path.getsize(path)) # file size in bytes
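Combining os.path.isfile and os.path.getsize with os.walk gives a directory-size accumulator; a minimal sketch (the throwaway directory only exists to make the snippet self-checking):

```python
import os
import tempfile

def dir_size(root):
    # sum the sizes (in bytes) of every regular file under root
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            fp = os.path.join(dirpath, name)
            if os.path.isfile(fp):          # skip broken symlinks etc.
                total += os.path.getsize(fp)
    return total

# quick self-check against a temporary directory
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "a.bin"), "wb") as f:
    f.write(b"hello")
total = dir_size(tmp)                       # 5 bytes
```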
Path

1.Path(path) / "case"

[out]: D:/case

str(Path(path)) + os.sep + "case"

[out]: D:/case
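Both spellings above build the same path string; a runnable check using a relative path (the drive-letter form only behaves identically on Windows):

```python
import os
from pathlib import Path

base = "data"
p1 = str(Path(base) / "case")           # pathlib's / operator
p2 = str(Path(base)) + os.sep + "case"  # manual concatenation with os.sep
```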
str

1.str1.join(str2)

str1 = "-"
str2 = "123"
[out]: 1-2-3
# x is the separator; number is the maximum number of splits, counted from the right

2.str1.rsplit(x,number)

str1 = "123"
x = '1'
number = 1
[out]: ["","23"]

3.str1.join(str2.rsplit(x,number))

str1 = os.sep+"labels"+os.sep
str2 = "123"
[out]: "/labels/23"
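The join(rsplit(...)) combination above is a "replace the last occurrence" idiom, e.g. swapping the final images directory in a dataset path for labels (hypothetical paths; forward slashes used here instead of os.sep for portability):

```python
def replace_last(path, old, new, count=1):
    # replace the last `count` occurrences of `old` in `path` with `new`
    return new.join(path.rsplit(old, count))

# only the rightmost "/images/" is rewritten
p = replace_last("data/images/train/images/001.jpg", "/images/", "/labels/")
```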
yaml
dict_variable is a dictionary; sort_keys defaults to True, which sorts the output keys alphabetically

1.yaml.dump(dict_variable,yaml_file,sort_keys)

import yaml

with open("test.yaml",'w') as f:
	yaml.dump(dict_variable,f,sort_keys=False)

2.yaml.load(yaml_file,Loader)

with open("test.yaml",'r') as f:
	files = yaml.load(f,Loader=yaml.FullLoader)
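dump and load round-trip cleanly; a sketch that serializes to a string instead of a file (pass a file object, as above, to write to disk), assuming PyYAML is installed:

```python
import yaml

cfg = {"lr": 0.01, "epochs": 30, "names": ["cat", "dog"]}

text = yaml.dump(cfg, sort_keys=False)              # keys keep insertion order
restored = yaml.load(text, Loader=yaml.FullLoader)  # FullLoader: no arbitrary object construction
```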
vars(opt) -> convert an argparse Namespace (opt) to a dict

3.vars(opt)

parse = argparse.ArgumentParser()
parse.add_argument('--adam', action='store_true', help='123')  # store_true requires an optional flag, not a positional
parse.add_argument('-n_a', nargs='?', type=int, default=1)
parse.add_argument('--w-b', nargs='+', type=str, default='oo')
opt = parse.parse_args(['--adam'])  # dashes in flag names become underscores in attribute names
opt.epoch = 30
opt.global_rank = 1
opt.b = opt.adam
[out]: {'adam': True, 'n_a': 1, 'w_b': 'oo', 'epoch': 30, 'global_rank': 1, 'b': True}
glob

test_a = glob.glob("D:" + os.sep + "*")
test_b = glob.iglob("D:" + os.sep + "*")

glob.glob returns a list of matching paths
glob.iglob returns a lazy generator over the same matches
for val in test_b:
	pass
for idx,val in enumerate(test_b):
	pass 
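The list/generator difference in action, against a throwaway directory (file names are illustrative):

```python
import glob
import os
import tempfile

tmp = tempfile.mkdtemp()
for name in ("a.txt", "b.txt", "c.log"):
    open(os.path.join(tmp, name), "w").close()

txt_files = glob.glob(os.path.join(tmp, "*.txt"))  # full list, built eagerly
lazy = glob.iglob(os.path.join(tmp, "*"))          # generator, yields on demand
count = sum(1 for _ in lazy)                       # consuming it exhausts it
```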
@contextmanager
a context manager wraps a block of code with setup and teardown logic
@contextmanager
def torch_distributed_zero_first(local_rank: int):
    # hold every non-master process at the barrier until the master is done
    # (torch.distributed needs a supported backend; NCCL is Linux-only, not Windows)
    if local_rank not in [-1, 0]:
        torch.distributed.barrier()
    yield
    # the master then waits until the other processes reach the second barrier
    if local_rank == 0:
        torch.distributed.barrier()
with torch_distributed_zero_first(rank):
     attempt_download(weights)
execution order:
	1. if local_rank not in [-1, 0]: barrier
	2. attempt_download(weights)
	3. if local_rank == 0: barrier
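The same enter/body/exit ordering can be seen with a plain @contextmanager that needs no torch (the logger name is made up):

```python
from contextlib import contextmanager

trace = []

@contextmanager
def step_logger(name):
    trace.append(f"enter {name}")  # runs before the with-block body
    yield
    trace.append(f"exit {name}")   # runs after the body finishes

with step_logger("download"):
    trace.append("body")
```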
zip
pair up iterables with zip; unpack ("unzip") with zip(*...)

zip(x,y)

# zip(x,y) -> returns an iterator of tuples (use list() to materialize it)
x = [1,2]
y = [3,4]
for i in zip(x,y):
	print(i)
[out]: (1,3) 
	   (2,4)

zip(*(x,y))

# zip(*(x,y)) unpacks the pair, so it equals zip(x,y); the idiom is used to transpose a 2-D matrix
x = [1,2]
y = [3,4]
for i,j in zip(*(x,y)):
	print(i,j)
[out]: 1 3
	   2 4
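The zip(*...) idiom generalizes to transposing any list-of-rows matrix:

```python
matrix = [[1, 2, 3],
          [4, 5, 6]]

# zip(*matrix) unpacks the rows and pairs up their i-th elements
transposed = [list(col) for col in zip(*matrix)]
```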
numpy

np.transpose(x,axes)

n_array = np.arange(30).reshape(2,3,5) # shape (2,3,5)
n_array.transpose(1,0,2) # swap axes 0 and 1 -> shape (3,2,5)
# equivalently, as a function call:
np.transpose(n_array,(1,0,2))  # shape (3,2,5)

np.expand_dims(x,axis)

n_array = np.arange(10) # shape (10,)
np.expand_dims(n_array,axis=0) # shape (1,10)
np.expand_dims(n_array,axis=1) # shape (10,1)
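The shape claims above can be checked directly:

```python
import numpy as np

a = np.arange(30).reshape(2, 3, 5)
swapped = a.transpose(1, 0, 2)   # axes 0 and 1 exchanged -> shape (3, 2, 5)

v = np.arange(10)                # shape (10,)
row = np.expand_dims(v, axis=0)  # shape (1, 10)
col = np.expand_dims(v, axis=1)  # shape (10, 1)
```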
pytorch

torch.unsqueeze(tensor,dim)

tensor = torch.as_tensor(np.arange(10))
torch.unsqueeze(tensor,dim=1) # torch.Size([10, 1])
torch.unsqueeze(tensor,dim=0) # torch.Size([1, 10])
Source: https://www.mshxw.com/it/316211.html (reprinted from www.mshxw.com)